Building a Ktor Native Server Docker Image

written by Adrian Dieter
[Header image: a large and a small Docker whale, each carrying a container with the Ktor logo. The larger container on the large whale is labeled JVM; the smaller one on the small whale is labeled Native.]

When we think of a server application running on the JVM compared to something like Go, we often assume that it takes forever to start and produces large artifacts, since we always have to bundle the JVM. With this blog post, we want to answer the question: can Kotlin/Native give us the benefits of other compiled languages without any drawbacks?

In this post we will answer:

  • How to set up a Ktor Kotlin Multiplatform project that targets the JVM and Native platforms.
  • How we can use the native binary to build a smaller container image.
  • How the native server performs compared to its JVM counterpart.
  • What the current limitations of Ktor are on Kotlin/Native.

Project Setup

A minimal example project is available on GitHub. We will walk through the interesting parts of the project setup here.

Starting from a new Kotlin project, we replace the kotlin("jvm") plugin with kotlin("multiplatform") and add the other required plugins to build.gradle.kts.

plugins {
    application
    kotlin("multiplatform") version "2.0.0"
    kotlin("plugin.serialization") version "2.0.0"
    id("com.gradleup.shadow") version "8.3.0"
}

Defining build targets

In the kotlin block, we can now configure the native and JVM build targets.

The nativeTarget is selected based on the current OS and architecture. When building on the Linux/amd64 platform, the nativeTarget will be linuxX64. Then the entryPoint for the application is configured; in this case, the main function.

val hostOs = System.getProperty("os.name")
val arch = System.getProperty("os.arch")
val nativeTarget = when {
    hostOs == "Mac OS X" && arch == "x86_64" -> macosX64("platform")
    hostOs == "Mac OS X" && arch == "aarch64" -> macosArm64("platform")
    hostOs == "Linux" && (arch == "x86_64" || arch == "amd64") -> linuxX64("platform")
    hostOs == "Linux" && arch == "aarch64" -> linuxArm64("platform")
    // Other supported targets are listed here: https://ktor.io/docs/native-server.html#targets
    else -> throw GradleException("Host OS is not supported in Kotlin/Native. $hostOs/$arch")
}
nativeTarget.apply {
    binaries {
        executable {
            entryPoint = "main"
        }
    }
}

For the JVM target, we first configure the main class of the application, in this case MainKt, which is the result of compiling src/jvmMain/kotlin/Main.kt. The uber JAR is then set up in the shadowJar task: it packages the output and all dependencies of the main compilation into a single JAR.

jvm {
    compilations {
        // `application` and `tasks` resolve against the enclosing project scope
        application {
            mainClass.set("MainKt")
        }
        val main = getByName("main")
        tasks {
            shadowJar {
                from(main.output)
                configurations = listOf(main.compileDependencyFiles)
            }
        }
    }
}

Here, val main = getByName("main") resolves the main compilation, which we can then use to access its output and dependency files.
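Running the shadowJar task then produces a single runnable JAR, by default under build/libs/ with an -all suffix, which can be started with java -jar. This is the artifact the JVM image ships.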

Source Sets

The Kotlin Gradle plugin automatically sets up source sets using the default hierarchy template. In our case, we get six source sets to configure and implement. nativeMain and jvmMain contain platform-specific code for the Native and JVM targets, respectively. All code shared between these targets lives in commonMain. For each Main source set, we also get a corresponding Test source set.


src
├── commonMain
│   ├── kotlin
│   └── resources
├── commonTest
│   └── kotlin
├── jvmMain
│   └── kotlin
├── jvmTest
│   └── kotlin
├── nativeMain
│   └── kotlin
└── nativeTest
    └── kotlin

For our Ktor server, this means that the majority of the implementation lives in the commonMain source set. All modules and the installation of (most) plugins go there. The jvmMain and nativeMain source sets are only used to define the main function and to configure the embeddedServer. This allows us to choose a different engine than CIO for the JVM target, or to configure plugins that are only supported on, or only needed by, one of the targets.

In the sourceSets block, dependencies can be declared for each source set. The empty blocks could be omitted, but are included for the sake of completeness.

sourceSets {
    commonMain.dependencies {
        implementation("io.ktor:ktor-server-core:$ktor_version")

        // ContentNegotiation (JSON)
        implementation("io.ktor:ktor-server-content-negotiation:$ktor_version")
        implementation("io.ktor:ktor-serialization-kotlinx-json:$ktor_version")

        // Resource based routing
        implementation("io.ktor:ktor-server-resources:$ktor_version")

        // HTML Templating
        implementation("io.ktor:ktor-server-html-builder:$ktor_version")
    }

    commonTest.dependencies {
        implementation(kotlin("test"))
        implementation("io.ktor:ktor-server-test-host:$ktor_version")
        implementation("io.ktor:ktor-client-content-negotiation:$ktor_version")
    }

    nativeMain.dependencies {
        implementation("io.ktor:ktor-server-cio:$ktor_version")
    }

    nativeTest.dependencies {}

    jvmMain.dependencies {
        // Could use an alternative server implementation for the JVM target here
        // implementation("io.ktor:ktor-server-jetty:$ktor_version")
        implementation("io.ktor:ktor-server-cio:$ktor_version")

        implementation("ch.qos.logback:logback-classic:$logback_version")
    }

    jvmTest.dependencies { }
}

Server Implementation

In our case, both nativeMain and jvmMain contain only a single, identical file: src/(jvm|native)Main/kotlin/Main.kt

fun main() {
    embeddedServer(CIO, port = 8080, host = "0.0.0.0", module = Application::module)
        .start(wait = true)
}

It configures the embeddedServer with the CIO engine and registers our module from the commonMain source set.
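Since each target has its own Main.kt, swapping engines per platform only touches one file. As a sketch, assuming io.ktor:ktor-server-jetty is added to the jvmMain dependencies (as hinted at in the dependency block above), the JVM entry point could use Jetty instead:

import io.ktor.server.application.Application
import io.ktor.server.engine.embeddedServer
import io.ktor.server.jetty.Jetty

// Hypothetical src/jvmMain/kotlin/Main.kt variant; nativeMain keeps CIO
fun main() {
    embeddedServer(Jetty, port = 8080, host = "0.0.0.0", module = Application::module)
        .start(wait = true)
}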

The entrypoint to the Ktor application is the fun Application.module() defined in src/commonMain/kotlin/module.kt. Here, we can register plugins, set up routing and load other modules, just like in a project that targets a single platform.

fun Application.module() {

    install(ContentNegotiation) {
        json()
    }

    routing {
        indexRoute()

        get("/health") {
            call.respond(HttpStatusCode.OK)
        }
    }
}

// routes.kt
fun Route.indexRoute() = get("/") {
    call.respondHtml {
        body {
            h1 { +"Hello from Ktor 👋" }
        }
    }
}

Platform Specific Code

We can define platform-specific behaviour using expected and actual declarations.

For example, a new GET /platform endpoint that returns {"platform": "JVM"} or {"platform": "Native"}, depending on the platform the project was compiled for, can be added by:

  1. Declaring a function platform(): String in commonMain that returns the name of the current platform, using the expect keyword.
    expect fun platform(): String
    
  2. Implementing this function in jvmMain and nativeMain
    // jvmMain/kotlin/platform.jvm.kt
    actual fun platform(): String = "JVM"
    
    // nativeMain/kotlin/platform.native.kt
    actual fun platform(): String = "Native"
    
  3. Using platform() in a new platform route in commonMain
    @Serializable
    data class PlatformResponse(val platform: String)
    
    fun Route.platformRoute() = get("/platform") {
        call.respond(PlatformResponse(platform()))
    }
    

The new GET /platform endpoint now responds with {"platform":"Native"} or {"platform":"JVM"} depending on the build target.
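One detail the steps above leave out: like indexRoute(), the new route has to be registered in the routing block of Application.module. A minimal wiring of our module could look like this:

fun Application.module() {

    install(ContentNegotiation) {
        json()
    }

    routing {
        indexRoute()
        platformRoute() // exposes GET /platform

        get("/health") {
            call.respond(HttpStatusCode.OK)
        }
    }
}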

Building an Image 🐳

Running the linkReleaseExecutablePlatform Gradle task produces a ktor-native-docker.kexe binary for the current platform (the Platform suffix in the task name comes from the target name we chose in build.gradle.kts). So, on macOS, the binary will be compiled for the macosX64 or macosArm64 target, depending on the architecture. If we want to target Linux, we can build inside a Linux container. To target x86 on an arm64 machine, Docker can emulate the desired platform using --platform linux/amd64.

NativeDebian.Dockerfile

FROM gradle:8.9.0-jdk21 AS build
COPY --chown=gradle:gradle . /app
WORKDIR /app
RUN gradle linkReleaseExecutablePlatform --no-daemon --stacktrace --info

FROM debian:12-slim
EXPOSE 8080
COPY --from=build /app/build/bin/platform/releaseExecutable/ktor-native-docker.kexe /app.kexe

ENTRYPOINT ["/app.kexe"]

And build it using docker buildx build --platform linux/amd64 -f NativeDebian.Dockerfile -t ktor-native .
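The resulting image can then be started locally with docker run --rm -p 8080:8080 ktor-native.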

💡 Why not use an alpine base image?

Kotlin/Native binaries rely on glibc being available on the system, as the Linux targets are linked against it. More details can be found in this issue.

Alpine is based on musl. Running glibc programs there requires the gcompat compatibility layer.

When testing the Native binary on Alpine, this came with huge performance penalties: under high load, the application would simply stop responding and not recover, sitting at 100% CPU. On macOS, the container would also only use one CPU, regardless of availability.

Speeding up the build

Right now, on every build, the compiler has to download large dependencies for Kotlin/Native. To prevent this, we prepare a builder image once that fetches and caches these dependencies.

NativeBuild.Dockerfile

FROM gradle:8.9.0-jdk21
COPY --chown=gradle:gradle *.gradle.kts gradle.properties /app/
COPY --chown=gradle:gradle src /app/src
WORKDIR /app
RUN gradle compileKotlinPlatform --stacktrace --info --build-cache

It is built using docker buildx build --platform linux/amd64 -f NativeBuild.Dockerfile -t ktor-native:build .. We can then replace FROM gradle:8.9.0-jdk21 AS build with FROM ktor-native:build AS build in our NativeDebian.Dockerfile.

The (large) ktor-native:build image caches the dependencies and only needs to be rebuilt when they change. There are other options for improving build speed, such as using a (remote) Gradle build cache as described here.
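As an illustration, and not part of the example project, a remote HTTP build cache only needs a small addition to settings.gradle.kts; the URL below is a placeholder:

// settings.gradle.kts: hypothetical remote build cache configuration
buildCache {
    local {
        isEnabled = true
    }
    remote<HttpBuildCache> {
        url = uri("https://build-cache.example.com/cache/") // placeholder URL
        isPush = System.getenv("CI") != null // only CI builds populate the cache
    }
}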

With this, we can get the build time of our Native image to be comparable to the JVM image. 🙌

Results

Let’s compare our Native Linux/amd64 images with the JVM image.

Image         | Build Duration | Size      | Start Duration*
Native alpine | 1m 32s         | 15.01 MB  | 0.019s
Native debian | 1m 42s         | 74.78 MB  | 0.023s
Native ubuntu | 1m 40s         | 82.74 MB  | 0.022s
JVM alpine    | 1m 49s         | 123.67 MB | 0.609s
builder       | 2m 52s         | 2.8 GB    | -

* As reported by Ktor

When using the builder image, we achieve a similar build speed for an image that is only 60% of the size (74.78 MB vs. 123.67 MB) and starts roughly 26 times faster (0.023s vs. 0.609s). 🎉 The Alpine image would be only 12% of the size of the JVM image, but it has the performance issues described above.

Performance 🚀

For performance testing, we added a new route to our test application that calculates the SHA3-256 hash of the request body.

// limit caps the accepted request body size in bytes
fun Route.hash(limit: Long) = post("/hash") {
    if ((call.request.contentLength() ?: 0) > limit) {
        call.respond(HttpStatusCode.PayloadTooLarge)
    } else {
        val bytes = call.receiveChannel().toByteArray(limit)
        val hash = SHA3_256().digest(bytes).toHexString()
        call.respond(hash)
    }
}

This version of the app is then deployed to AWS Fargate containers with 2 vCPUs and 4 GB RAM. Using k6, we run simple load tests against the /hash endpoint with simulated users.

Each user waits 500ms between requests. Users are ramped up during the first 20 seconds, stay at the target level for 30 seconds, and ramp down for 10 seconds, for a total test time of 1 minute.

With up to 10, 50 and 100 concurrent users we get the following results:

10 Users    | Native on Debian | JVM
avg         | 19.37ms          | 19.06ms
min         | 12.89ms          | 13.17ms
max         | 85.97ms          | 80.93ms
p(90)       | 25.94ms          | 25.22ms
p(95)       | 30.54ms          | 33.19ms
CPU/RAM max | 15% / 1.6%       | 23% / 5.3%
requests    | 878 (14.5/s)     | 877 (14.5/s)

50 Users    | Native on Debian | JVM
avg         | 95.65ms          | 23.99ms
min         | 13.59ms          | 11.29ms
max         | 391.24ms         | 129.31ms
p(90)       | 196.74ms         | 41.33ms
p(95)       | 221.17ms         | 55.47ms
CPU/RAM max | 98% / 5%         | 59% / 9.5%
requests    | 3800 (63.0/s)    | 4319 (71.7/s)

100 Users   | Native on Debian | JVM
avg         | 666.7ms          | 33.13ms
min         | 12.89ms          | 10.85ms
max         | 1.68s            | 314.33ms
p(90)       | 1.1s             | 63.99ms
p(95)       | 1.19s            | 85.74ms
CPU/RAM max | 99% / 5.5%       | 77% / 11%
requests    | 3894 (64.7/s)    | 8486 (140.6/s)

At low load, the Native and JVM platforms are very comparable. Here, the Native platform even requires fewer resources than the JVM.

However, at higher loads, the Native platform reaches its maximum throughput much sooner. With 50 users, it barely handles a maximum of 63 req/s, with 10% of requests taking longer than 196ms, while the JVM at a similar throughput has a p(90) response time of 41ms.

Even with 100 users, the JVM keeps up and handles 140 req/s with an acceptable 95th-percentile response time of 85ms.

Long-running

Over a longer period of 30 minutes, with a single user waiting 500ms between requests, we can see that the JVM platform performs slightly better than the Native platform. On average there is a difference of roughly 6ms (about 10ms at the 95th percentile), with both platforms requiring only a small fraction of the available resources.

1 User      | Native on Debian | JVM
avg         | 21.52ms          | 15.61ms
min         | 15.22ms          | 12.44ms
max         | 116.7ms          | 76.05ms
p(90)       | 27.15ms          | 17.98ms
p(95)       | 30.68ms          | 19.78ms
CPU/RAM max | 5% / 1.8%        | 0.5% / 4%
requests    | 3439 (1.91/s)    | 3481 (1.93/s)

These performance tests are by no means exhaustive. We only tested a single scenario, calculating hashes of random data, and only looked at aggregated performance, not at how it behaves over time. What we can tell from them is that there are scenarios where both platforms perform very similarly, and others where the JVM performs significantly better.

Limitations of Ktor server using Kotlin/Native

The docs list four limitations of the native server. In addition to these, not all plugins and core features support the Native target.

At the top of most docs pages, you will find an info box stating either Native server support: ✅ or Native server support: ✖️. Some pages, however, are missing this flag, or general support exists but specific features are unavailable.

For example, the sessions page states Native server support: ✅; however, signing and encrypting session data using the built-in SessionTransportTransformerMessageAuthentication or SessionTransportTransformerEncrypt only works on the JVM.

On the serving static content page, this information is missing, and Routing.staticResources and Routing.staticFiles are not available on the Native target.

For templating, FreeMarker, Velocity, Mustache, Thymeleaf, Pebble and JTE do not work for the Native target. Only the HTML DSL is supported, which is great for smaller applications in combination with HTMX.

According to the authentication and authorization section, only Basic auth, Bearer auth, Form-based auth and OAuth are supported. Anyone who needs Digest auth, Session auth, LDAP or JWT must stick with the JVM.

When can Kotlin/Native be the right choice for a backend service?

We have shown how to run a Kotlin/Native Ktor server in a Docker environment, but with all these limitations, does it even make sense to consider the Kotlin/Native platform for a Ktor server?

I would argue yes, if:

  • Optimization for application startup time or image size is important.
  • You can live with the limitations of available features, plugins and libraries.
  • The server needs to run somewhere where a JVM cannot, and you really want to use Kotlin.

This was mainly intended as an experiment, and just because something is possible does not mean you should move all your Ktor services to the Kotlin/Native platform.

To quote the Kotlin docs:

Kotlin/Native is primarily designed to allow compilation for platforms on which virtual machines are not desirable or possible, such as embedded devices or iOS.

Why Kotlin/Native?

It could be interesting to test this in a serverless function setting, where cold-start time matters, small images are beneficial, and a single instance does not handle multiple requests concurrently.