Scope and Approach

This spec is designed to work basically everywhere, from tiny embedded systems running on a coin battery to massive server farms crunching serious data. We're flexible like that.

All Kinds of Implementations

The language is built to support whatever approach you want to take:

How You Build It

Direct interpreters — Just run the code straight up, no compilation needed. Perfect for scripting, rapid prototyping, or educational environments where you want immediate feedback. Think of a student learning the language in an interactive REPL, typing expressions and seeing results instantly.

Traditional compilers with linkers — The classic approach. Compile, link, ship it. This is your bread-and-butter for production systems. You get full optimization, static linking, the works. Great when you're building software that needs to be deployed and run efficiently.

Optimizing compilers — Go wild with the optimizations. Inline everything. Make it scream. When you're building a high-frequency trading system or a game engine where every microsecond counts, you want a compiler that'll spend hours analyzing your code to squeeze out every bit of performance. The spec is written so aggressive optimizations are possible while preserving correct behavior.

JIT compilers — Compile stuff on the fly while the program's running. Live dangerously. This is cool for environments where you're dynamically loading code, or where you want to optimize based on actual runtime behavior. The spec doesn't mandate how compilation happens, so JIT is totally valid.

Where It Runs

Embedded systems — Little ROM-based microcontrollers with tight constraints. We got you. Picture this: you're coding for a Mars rover with 64KB of RAM and no operating system. Or maybe you're writing firmware for a pacemaker where a crash could literally kill someone. The language needs to work in this harsh environment — no dynamic allocation required, deterministic timing, direct hardware access. That's in scope.
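A minimal sketch of what that embedded case might look like, written in the same illustrative syntax as the code examples later in this section. The register address, the volatile pointer cast, and the u32 type are assumptions made up for the example, not features the spec mandates:

// Hypothetical freestanding firmware: drive a GPIO pin by writing a
// memory-mapped register directly. No heap, no OS calls, no hidden runtime.
func setPinHigh(pin: u32) {
    // 0x40000014 is an illustrative register address, not a real device.
    let gpioOut = 0x40000014 as *volatile u32;
    // Read-modify-write the output register; nothing here allocates or
    // blocks, so timing stays deterministic.
    *gpioOut = *gpioOut | (1 << pin);
}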

Desktop apps — Your typical single-user machine. Classic. Whether it's a text editor, a game, or a media player, desktop applications have moderate resources and expect reasonable performance. The language should make these easy to write without requiring heroic effort or deep systems knowledge for simple tasks.

Server environments — Multi-user, multi-processing, handling tons of concurrent work. Imagine you're building the next big web service handling millions of requests per day across hundreds of cores. You need efficient concurrency, low latency, and the ability to squeeze performance out of that beefy hardware. The spec accommodates this use case — threading, atomics, the whole nine yards.
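A rough sketch of the kind of code that use case implies. The spawn construct, the Atomic type, and fetchAdd are illustrative placeholders, not names taken from this spec:

// Hypothetical request accounting shared across worker threads. Each
// worker bumps the counter with an atomic add, so no lock is needed.
func runWorkers(workers: int) {
    let requestCount = Atomic<u64>(0);   // illustrative atomic type
    for i in 0..workers {
        spawn {                          // illustrative thread-spawn construct
            handleRequests();            // hypothetical per-worker request loop
            requestCount.fetchAdd(1);
        }
    }
}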

High-performance computing — When you need every last cycle and you're counting FLOPs. Think weather simulation, computational fluid dynamics, machine learning training on GPU clusters. These applications need to extract maximum performance from specialized hardware, use SIMD instructions, and scale across thousands of cores. The language shouldn't get in your way.

What We're Balancing

Portability

The spec is carefully written so you can write portable code — stuff that'll work on any conforming implementation without changes. To make that happen, we:

Define what must happen. Clear requirements for how things should behave.

Call out implementation-defined stuff. When something can vary, we're explicit about it.

Document the undefined. When behavior isn't specified, we say so. No surprises.

Here's what portable code might look like:

func calculateSum(arr: []int) -> int {
    let sum: int = 0;
    // Walk the slice by index and accumulate; nothing here depends on
    // pointer size, endianness, or any other hardware detail.
    for i in 0..arr.len {
        sum += arr[i];
    }
    return sum;
}

This code makes no assumptions about pointer sizes, endianness, or hardware details. It'll work identically on ARM, x86, RISC-V, whatever. That's the portable path.

Performance

But look, we're not gonna sacrifice performance just to make everything portable. That'd be lame. The language lets you:

Access hardware directly. Get down to the metal when you need to.

Use machine-specific optimizations. If your platform has special sauce, use it.

Control resources explicitly. You decide what gets allocated and when.

Keep overhead minimal. The runtime shouldn't be doing a bunch of stuff behind your back.

Here's what non-portable, performance-optimized code might look like:

func calculateSumSIMD(arr: []int) -> int {
    // x86-specific: use SSE instructions for parallel addition.
    // calculateSumSSE is a platform-specific helper defined elsewhere.
    @if(target_arch == "x86_64") {
        return calculateSumSSE(arr);
    } @else {
        return calculateSum(arr);  // portable fallback from above
    }
}

This version uses x86 SIMD instructions when available, potentially running about 4x faster on large arrays since SSE adds four 32-bit integers per instruction. It's not portable — you're explicitly writing platform-specific code — but the spec doesn't stop you. Your choice, based on your needs.

The Tradeoff in Action

Let's look at a concrete example: reading a 32-bit integer from a byte buffer.

Portable version:

func readInt32Portable(buf: []byte) -> i32 {
    // Decode little-endian explicitly, one byte at a time, so the result
    // is the same no matter what byte order the host machine uses.
    return  (buf[0] as i32)
          | ((buf[1] as i32) << 8)
          | ((buf[2] as i32) << 16)
          | ((buf[3] as i32) << 24);
}

This works on any platform, but it takes four separate byte loads plus the shifts and ORs to stitch them together.

Non-portable, optimized version:

func readInt32Fast(buf: []byte) -> i32 {
    // Assumes a little-endian target and that a direct 4-byte load is safe.
    return *(buf.ptr as *i32);
}

This is faster (a single 4-byte load instead of four byte loads and the stitching) but it bakes in assumptions about endianness and alignment. If you're only targeting x86, that's fine. If you need to run on big-endian or strict-alignment architectures, it'll break.

The spec lets you write either. It tells you which is which. You decide.
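One common middle ground, sketched here with the same @if construct as the SIMD example above, is to keep the portable version as the default and opt into the fast path only on targets where its assumptions are known to hold. Nothing in the spec requires this pattern; it's just one way to get both:

func readInt32(buf: []byte) -> i32 {
    @if(target_arch == "x86_64") {
        return readInt32Fast(buf);      // little-endian, loads are safe here
    } @else {
        return readInt32Portable(buf);  // safe default everywhere else
    }
}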

Flexibility

Different apps need different things, and that's cool:

Safety-critical systems? You might want to use a restricted subset of the language with extra validation. Do it (there's a sketch of what that might look like after this list).

Need maximum performance? Go ahead and exploit implementation-specific features. We won't judge.

Want maximum portability? Stick to strictly conforming code. That works too.

System programming? You'll need low-level hardware access. It's there for you.
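For the safety-critical case mentioned above, here's one way a project might gate extra validation behind a build-time flag. Only the @if construct appears elsewhere in this section; the checked_build flag and the panic call are hypothetical:

// Hypothetical: bounds validation compiled in for safety builds and
// compiled out entirely for the performance-tuned release build.
func readSample(buf: []byte, idx: int) -> byte {
    @if(checked_build) {
        if idx < 0 || idx >= buf.len {
            panic("sample index out of range");  // illustrative failure handler
        }
    }
    return buf[idx];
}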

Real-World Use Cases

Let's tie this all together with some concrete scenarios where different approaches shine:

Case 1: Cross-Platform CLI Tool You're building a command-line utility for text processing that needs to run on Linux, macOS, and Windows. You write completely portable code using only specified behavior. You compile once for each platform, and it just works. The portability aspects of the spec shine here.

Case 2: Embedded Motor Controller You're writing firmware for a motor controller in an electric car. You use a freestanding implementation (no OS), access hardware registers directly, and use careful timing. You enable all safety checks during development but disable them in production for deterministic performance. The flexibility and direct hardware access are crucial here.

Case 3: High-Performance Database You're building a database engine that needs to run fast on modern server hardware. You write mostly portable code, but you use compile-time detection to leverage SSE/AVX on x86 and NEON on ARM for data processing hot paths. The portability/performance balance is key — mostly portable, but optimized where it counts.

Case 4: Operating System Kernel You're writing an OS kernel that needs maximum control. You use inline assembly, direct memory manipulation, and every low-level feature available. Safety features are selectively enabled for parts that can afford the overhead. The "trust the programmer" philosophy is essential here — the language doesn't get in your way.

Case 5: Web Service Backend You're building a high-throughput API server. You use portable code for business logic, platform-specific threading libraries for concurrency, and careful memory management to avoid GC pauses. The mix of portability (business logic) and platform-specific optimization (threading, I/O) lets you get the best of both worlds.

These aren't just theoretical. The spec is designed to handle all of these use cases well, even though they have radically different requirements.

What's In This Spec

We define the important stuff:

1. Language syntax — What valid programs look like.

2. Language semantics — What your code actually means and does.

3. Library facilities — Standard functions everyone can count on.

4. Implementation requirements — What a conforming implementation has to provide.

5. Program requirements — What conforming programs need to do.

What's NOT In This Spec

We intentionally don't mandate:

Compilation strategies. Build it however you want.

Performance guarantees. We're not gonna promise specific speeds (beyond basic complexity stuff).

Development tools. Use whatever IDE, debugger, or tools you like.

Testing methodologies. How you validate implementations is up to you.

Library distribution formats. Ship your source however works for you.

This gives implementors and users room to innovate and adapt to different needs.

How It Fits Into the Environment

The language works within an execution environment; it doesn't try to define one. Here's how that shakes out:

We assume some basics. File systems, standard I/O, that kind of thing. A hosted implementation provides these (think Linux, Windows, macOS). You can read files, write to stdout, get command-line args — the usual stuff.

We define the interface. How the language talks to these services. The spec says "here's how you read a file," not "here's how the OS must implement file reading." That's the OS's problem. We just define the interface between your code and the environment.

Implementations can add extras. Platform-specific features are totally fine. Want to use Linux-specific APIs like epoll? Go ahead. Want to access the Windows registry? Cool. Want to use macOS frameworks? Do it. The spec doesn't forbid platform-specific stuff — it just doesn't require it. Portable code stays portable, but you can go native when needed.

Some examples of platform-specific features implementations might add:

Operating system APIs: Direct access to OS-specific functionality (Windows API, POSIX, etc.)

Hardware intrinsics: SIMD instructions, atomic operations specific to certain CPUs

Linking behavior: How the implementation handles dynamic libraries, symbol visibility, etc.

Optimization hints: Compiler-specific attributes or pragmas for fine-tuning performance

Debugging support: Platform-specific debugging info formats, profiling hooks, etc.

Freestanding or hosted, your call. Whether you're running on bare metal or in a full OS, both work. A freestanding implementation doesn't assume an OS exists. You're on your own for I/O, memory management, everything. Perfect for kernels, bootloaders, or embedded systems. The spec defines what's available in freestanding mode (basically: core language features, no I/O) versus hosted mode (full standard library).
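As a rough illustration of that split, the same diagnostic routine might print through the standard library when hosted and fall back to a raw device write when freestanding. The hosted flag, io.print, and the UART address are all assumptions made up for the example:

// Hypothetical: one logging function, two execution environments.
func logMessage(msg: string) {
    @if(hosted) {
        io.print(msg);  // hosted: full standard library available
    } @else {
        // Freestanding: no I/O library, so push bytes to an illustrative
        // memory-mapped UART register instead.
        let uart = 0x10000000 as *volatile u8;
        for i in 0..msg.len {
            *uart = msg[i];
        }
    }
}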

The Boundary

Where does the language end and the environment begin? The spec is careful about this:

In the spec: Language syntax, semantics, type system, memory model, core library functions that work everywhere

Not in the spec: How compilation works, how object files are structured, how the OS schedules threads, how specific hardware instructions are chosen

This boundary means you can implement the language for wildly different environments — from embedded microcontrollers to supercomputers — without the spec getting in the way. The core language is consistent everywhere, but the environment adapts to where you're running.

This keeps the language usable in tons of different contexts while maintaining a solid, consistent core. Best of both worlds.

Copyright (c) 2025 Ocean Softworks, Sharkk