Article series
This is the second of four articles describing different aspects of shipping Hilla apps to production:
- Part 1: Production Build
- Part 2: Docker Images
- Part 3: CI/CD
- Part 4: Serverless Deployment
Deployment as a Docker container
Nowadays, applications are usually deployed as containers. This applies equally to deployments in public clouds and in on-premises environments. A production build in the form of an executable JAR file or an executable Native Image is well suited for such a deployment as a Docker container. The starting point in both cases is a corresponding Dockerfile.
Docker image for JAR file
If the production build is available as an executable JAR file, a suitable Docker image can be described with the following Dockerfile. The Dockerfile should be located in the base directory of the Hilla project and have the following content:
# Base image with Java runtime
FROM eclipse-temurin:21-jre
WORKDIR /app
# Copy production build JAR into Docker image
COPY target/*.jar app.jar
# Expose port of Hilla app
EXPOSE 8080
# Start Hilla app on container startup
ENTRYPOINT ["java", "-jar", "app.jar"]
The Dockerfile specifies a base image with the required Java runtime, copies the production build JAR file into the Docker image, and configures the port via which the application can receive requests. The ENTRYPOINT points to the JAR file in the Docker image, which is executed when the container is started.
The Docker image can be created locally with Docker or Podman:
docker|podman build --tag my-app .
The command is executed in the base directory of the Hilla project. The . points to the same directory and the Dockerfile contained therein. The Docker image is then available and usable locally.
A Docker container with the production build of the Hilla app can be started based on the Docker image that was created:
docker|podman run -it --rm --publish 8080:8080 my-app
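Whether the running container responds can be checked with a simple HTTP request against the published port, assuming the Hilla app serves its start page on port 8080:
curl -I http://localhost:8080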
The production build of the Hilla app can be deployed to numerous cloud providers or to a corporate Kubernetes cluster using the created Dockerfile or Docker image.
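For a Kubernetes deployment, the image is typically pushed to a container registry first and then referenced from there. The following is only a minimal sketch; the registry URL, names, and tag are placeholders:
# Tag and push the image to a container registry
docker|podman tag my-app registry.example.com/my-team/my-app:1.0.0
docker|podman push registry.example.com/my-team/my-app:1.0.0
# Create and expose a deployment in the Kubernetes cluster
kubectl create deployment my-app --image=registry.example.com/my-team/my-app:1.0.0
kubectl expose deployment my-app --type=LoadBalancer --port=8080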
Optimizations
An executable JAR file is convenient for deployment, but it also has certain disadvantages. Loading all required classes from a nested JAR file can take some time when starting the application, especially with larger JAR files. This can be avoided by unpacking the JAR file: loading classes from an unpacked JAR file is faster and is therefore recommended for production deployments. The JAR file can be unpacked separately or as a build stage in the Dockerfile:
# First stage
FROM eclipse-temurin:21-jre AS build
WORKDIR /build
# Copy production build JAR into Docker image
COPY target/*.jar app.jar
# Extract JAR
RUN java -Djarmode=tools -jar app.jar extract --destination extracted
# Second stage
FROM eclipse-temurin:21-jre
WORKDIR /app
# Copy extracted lib folder and app.jar from build stage
COPY --from=build /build/extracted/lib lib
COPY --from=build /build/extracted/app.jar app.jar
EXPOSE 8080
ENTRYPOINT ["java", "-jar", "app.jar"]
In a test with a simple Hilla demo app, this optimization reduced the start time by approx. 1 second on average: without the optimization, the average start time was approx. 4 seconds; with it, approx. 3 seconds.
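The extraction can also be performed separately, outside Docker, for example to inspect the result or to measure the start time directly. A minimal sketch; the JAR name is an example and depends on the project:
# Extract the production build JAR into the directory "extracted"
java -Djarmode=tools -jar target/my-app-1.0-SNAPSHOT.jar extract --destination extracted
# Start the app from the extracted layout
java -jar extracted/my-app-1.0-SNAPSHOT.jar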
Unpacking the JAR file of the production build also helps to create an efficient Docker image with several layers. The assumption is that the various parts of the JAR file change at different rates: while the application code is likely to change frequently, individual dependencies change far less often. Parts that change rarely should be stored in the lower layers of a Docker image; parts that change more frequently belong in the upper layers. This way, Docker only has to pull the updated layers when loading the Docker image. The procedure shown above for unpacking the JAR file of the production build of the Hilla app can be extended with the additional --layers option. The file BOOT-INF/layers.idx is used for this purpose. This file was generated by Spring Boot during the creation of the production build:
- "dependencies":
- "BOOT-INF/lib/"
- "spring-boot-loader":
- "org/"
- "snapshot-dependencies":
- "application":
- "BOOT-INF/classes/"
- "BOOT-INF/classpath.idx"
- "BOOT-INF/layers.idx"
- "META-INF/"
The information it contains is used to distribute the individual parts of the JAR file so that they can be combined into efficient layers in a suitable Dockerfile:
# First stage
FROM eclipse-temurin:21-jre AS build
WORKDIR /build
# Copy production build JAR into Docker image
COPY target/*.jar app.jar
# Extract JAR using additional --layers option
RUN java -Djarmode=tools -jar app.jar extract --layers --destination extracted
# Second stage
FROM eclipse-temurin:21-jre
WORKDIR /app
# Copy the extracted jar content from the build stage
# Every COPY step creates a new docker layer
# This allows docker to only pull the changes it really needs
COPY --from=build /build/extracted/dependencies/ ./
COPY --from=build /build/extracted/spring-boot-loader/ ./
COPY --from=build /build/extracted/snapshot-dependencies/ ./
COPY --from=build /build/extracted/application/ ./
EXPOSE 8080
ENTRYPOINT ["java", "-jar", "app.jar"]
Another option for reducing the start time and memory consumption is so-called Class Data Sharing (CDS). For this purpose, the Hilla app is executed once for training purposes after unpacking the JAR file. The result of this training run is an archive file (app.jsa in the example below) containing a dump of the internal representation of the classes loaded when the Hilla app was started. This file can be used like a cache for subsequent starts of the same Hilla app. This optimization can also take place separately or within the Dockerfile:
# First stage
FROM eclipse-temurin:21-jre AS build
WORKDIR /build
# Copy production build JAR into Docker image
COPY target/*.jar app.jar
# Extract JAR
RUN java -Djarmode=tools -jar app.jar extract --destination extracted
WORKDIR /build/extracted
# Perform a training run and dump classes to app.jsa
RUN java -XX:ArchiveClassesAtExit=app.jsa -Dspring.context.exit=onRefresh -jar app.jar
# Second stage
FROM eclipse-temurin:21-jre
WORKDIR /app
# Copy extracted lib folder, app.jar and app.jsa from build stage
COPY --from=build /build/extracted/lib lib
COPY --from=build /build/extracted/app.jar app.jar
COPY --from=build /build/extracted/app.jsa app.jsa
EXPOSE 8080
# Use cache with extra parameter
ENTRYPOINT ["java", "-XX:SharedArchiveFile=app.jsa", "-jar", "app.jar"]
In a further test, this CDS optimization reduced the start time by an additional approx. 1 second on average. With the optimizations shown, the average start time of a simple Hilla app dropped from approx. 4 seconds to approx. 2 seconds.
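Whether the CDS archive is actually used at startup can be verified locally in the directory containing the extracted app.jar and the generated app.jsa. With the standard JVM option -Xshare:on, the JVM refuses to start if the archive cannot be used:
# Fail fast if the CDS archive cannot be mapped
java -Xshare:on -XX:SharedArchiveFile=app.jsa -jar app.jar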
Spring Boot has supported CDS since version 3.3. CDS is an alternative optimization approach if the use of Native Images is not possible or not an option. Further information on CDS in conjunction with Spring Boot can be found in this recent video: Efficient Containers with Spring Boot 3, Java 21 and CDS (SpringOne 2024).
Docker image for Native Image
If the production build is available as a Native Image, a suitable Docker image can be described with the following Dockerfile. In this case, too, the Dockerfile should be located in the base directory of the Hilla project and have the following content:
# Lightweight base image
FROM debian:bookworm-slim
WORKDIR /app
# Copy production build native image into Docker image
COPY target/my-app /app/my-app
# Expose port of Hilla app
EXPOSE 8080
# Start Hilla app on container startup
CMD ["/app/my-app"]
The Dockerfile is based on a slimmed-down base image, in this case debian:bookworm-slim. The production build of the Hilla app in the form of a Native Image is copied into the Docker image. The Dockerfile also contains the configuration for the port via which the application can receive requests. The CMD instruction specifies that the Native Image is executed when the container is started.
This Docker image can also be created locally with Docker or Podman:
docker|podman build --tag my-app .
The command is again executed in the base directory of the Hilla project. The . points to the same directory and the Dockerfile contained therein. The Docker image is then available locally and can be used.
Based on the Docker image created, a Docker container can be started with the production build of the Hilla app:
docker|podman run -it --rm --publish 8080:8080 my-app
Building the Hilla app as a Native Image can also be part of the Dockerfile if required. For this purpose, the Dockerfile is split into two stages:
# First stage: Base image with GraalVM and native image support
FROM ghcr.io/graalvm/native-image-community:21.0.2 AS build
WORKDIR /usr/src/app
# Copy project files
COPY . .
# Create native image
RUN ./mvnw clean package -Pproduction -Pnative native:compile
# Second stage: Lightweight base image
FROM debian:bookworm-slim
WORKDIR /app
# Copy the native image from the build stage
COPY --from=build /usr/src/app/target/my-app /app/my-app
EXPOSE 8080
CMD ["/app/my-app"]
The Native Image is created in the first stage of the Dockerfile. An image with support for GraalVM is used as the base image. The required files and dependencies are copied into the Docker image, and then the production build of the Hilla app is built as a Native Image. In the second stage, the Docker image is created with the Native Image built previously.
Once again, the production build of the Hilla app can be deployed to many cloud providers or to a corporate Kubernetes cluster using the created Dockerfile or Docker image.
Summary
The creation of suitable and efficient Docker images is an important foundation for delivering a production-ready Hilla app in a container environment. If the production build of the Hilla app has been packaged as an executable JAR file, the start time in the Docker container can be optimized through systematic unpacking and CDS. The relevant steps can be implemented as build stages in the Dockerfile.
Part 3 of this article series describes how the production build as well as the creation and publishing of the matching Docker image can be automated with a CI/CD pipeline.