Getting Started

Simple Project

The simplest Mill build for a Java project looks as follows:

build.sc
import mill._, scalalib._

object foo extends JavaModule {}

The simplest Mill build for a Scala project looks as follows:

build.sc
import mill._, scalalib._

object foo extends ScalaModule {
  def scalaVersion = "2.13.10"
}

Both of these would build a project laid out as follows:

build.sc
foo/
    src/
        FileA.java
        FileB.scala
    resources/
        ...
out/
    foo/
        ...

You can download an example project with this layout here:

The source code for this module would live in the foo/src/ folder, matching the name you assigned to the module. Output for this module (compiled files, resolved dependency lists, …) would live in out/foo/.

This can be run from the Bash shell via:

$ mill foo.compile                 # compile sources into classfiles

$ mill foo.run                     # run the main method, if any

$ mill foo.runBackground           # run the main method in the background

$ mill foo.launcher                # prepares out/foo/launcher.dest/run, which you can run later

$ mill foo.jar                     # bundle the classfiles into a jar

$ mill foo.assembly                # bundle classfiles and all dependencies into a jar

$ mill -i foo.console              # start a Scala console within your project (requires interactive mode, -i)

$ mill -i foo.repl                 # start an Ammonite REPL within your project (requires interactive mode, -i)

You can run mill resolve __ to see a full list of the different tasks that are available, mill resolve foo._ to see the tasks within foo, mill inspect foo.compile to inspect a task’s doc-comment documentation or what it depends on, or mill show foo.scalaVersion to show the output of any task.

The most common tasks that Mill can run are cached targets, such as compile, and un-cached commands such as foo.run. Targets do not re-evaluate unless one of their inputs changes, whereas commands re-run every time.
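
If you want to see the difference for yourself, you can define your own cached target and un-cached command in build.sc. The following is a minimal sketch, not part of Mill's built-in API: lineCount and printLineCount are illustrative names.

build.sc
import mill._, scalalib._

object foo extends JavaModule {
  // a cached target: only re-evaluated when the module's source files change
  def lineCount = T {
    allSourceFiles().map(f => os.read.lines(f.path).size).sum
  }

  // an un-cached command: re-runs every time it is invoked
  def printLineCount() = T.command {
    println(lineCount())
  }
}

Running mill foo.printLineCount twice in a row executes the command both times, but lineCount itself is only recomputed if a source file changed in between.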

Multi-module Projects

Java Example

build.sc
import mill._, scalalib._

object foo extends JavaModule

object bar extends JavaModule {
  def moduleDeps = Seq(foo)
}

Scala Example

build.sc
import mill._, scalalib._

object foo extends ScalaModule {
  def scalaVersion = "2.13.10"
}

object bar extends ScalaModule {
  def moduleDeps = Seq(foo)

  def scalaVersion = "2.13.10"
}

You can define multiple modules the same way you define a single module, using def moduleDeps to define the relationship between them. The above builds expect the following project layout:

build.sc
foo/
    src/
        Main.scala
    resources/
        ...
bar/
    src/
        Main2.scala
    resources/
        ...
out/
    foo/
        ...
    bar/
        ...

And can be built/run using:

$ mill foo.compile
$ mill bar.compile

$ mill foo.run
$ mill bar.run

$ mill foo.jar
$ mill bar.jar

$ mill foo.assembly
$ mill bar.assembly

Mill’s evaluator will ensure that the modules are compiled in the right order, and recompiled as necessary when source code in each module changes.
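
If repeating scalaVersion in every module becomes tedious, you can factor shared settings into a trait that your modules extend. This is a sketch; the trait name is just an example:

build.sc
import mill._, scalalib._

// shared settings for every Scala module in this build
trait MyModule extends ScalaModule {
  def scalaVersion = "2.13.10"
}

object foo extends MyModule

object bar extends MyModule {
  def moduleDeps = Seq(foo)
}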

Modules can also be nested:

build.sc
import mill._, scalalib._

object foo extends ScalaModule {
  def scalaVersion = "2.13.10"

  object bar extends ScalaModule {
    def moduleDeps = Seq(foo)

    def scalaVersion = "2.13.10"
  }
}

Which would result in a similarly nested project layout:

build.sc
foo/
    src/
        Main.scala
    resources/
        ...
    bar/
        src/
            Main2.scala
        resources/
            ...
out/
    foo/
        ...
        bar/
            ...

Where the nested modules can be run via:

$ mill foo.compile
$ mill foo.bar.compile

$ mill foo.run
$ mill foo.bar.run

$ mill foo.jar
$ mill foo.bar.jar

$ mill foo.assembly
$ mill foo.bar.assembly

Watch and Re-evaluate

You can use the --watch flag to make Mill watch a task’s inputs, re-evaluating the task as necessary when the inputs change:

$ mill --watch foo.compile
$ mill --watch foo.run
$ mill -w foo.compile
$ mill -w foo.run

Mill’s --watch flag watches both the files you are building using Mill, as well as Mill’s own build.sc file and anything it imports, so any changes to your build.sc will automatically get picked up.

For long-running processes like web servers, you can use runBackground to make sure they recompile and restart when code changes, forcefully terminating the previous process even if it is still alive:

$ mill -w foo.compile
$ mill -w foo.runBackground
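
For example, a module whose main method starts a server and never returns is a good candidate for runBackground. The module name, class, and port below are purely illustrative:

build.sc
import mill._, scalalib._

object server extends ScalaModule {
  def scalaVersion = "2.13.10"
}

server/src/Server.scala
package server

import java.net.ServerSocket

object Server {
  def main(args: Array[String]): Unit = {
    val socket = new ServerSocket(8080) // illustrative port
    println("Listening on port 8080")
    while (true) {
      val client = socket.accept()
      client.getOutputStream.write("HTTP/1.1 200 OK\r\n\r\nhello\r\n".getBytes)
      client.close()
    }
  }
}

mill -w server.runBackground then kills and restarts this process whenever its sources change.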

Parallel Task Execution

By default, Mill evaluates all tasks in sequence, but it also supports processing tasks in parallel. This feature is currently experimental and we encourage you to report any issues you find on our bug tracker.

To enable parallel task execution, use the --jobs (-j) option followed by the maximum number of parallel threads.

Example: Use up to 4 parallel threads to compile all modules:

mill -j 4 __.compile

To use as many threads as your machine has (logical) processor cores, use --jobs 0. To disable parallel execution, use --jobs 1; this is currently the default.

Please note that the maximum possible parallelism depends on your project: tasks that depend on each other can't be processed in parallel. For instance, in the multi-module example above, bar.compile cannot run in parallel with foo.compile because bar depends on foo, whereas two unrelated modules could be compiled concurrently.

Command-line usage

Mill is a command-line tool and supports various options.

Run mill --help for a complete list of options

Output of mill --help
Mill Build Tool, version {mill-version}
usage: mill [options] [[target [target-options]] [+ [target ...]]]
  -D --define <k=v>    Define (or overwrite) a system property.
  -b --bell            Ring the bell once if the run completes successfully, twice if it fails.
  --bsp                Enable BSP server mode.
  --color <bool>       Enable or disable colored output; by default colors are enabled in both REPL
                       and scripts mode if the console is interactive, and disabled otherwise.
  -d --debug           Show debug output on STDOUT
  --disable-ticker     Disable ticker log (e.g. short-lived prints of stages and progress bars).
  -h --home <path>     (internal) The home directory of internally used Ammonite script engine;
                       where it looks for config and caches.
  --help               Print this help message and exit.
  -i --interactive     Run Mill in interactive mode, suitable for opening REPLs and taking user
                       input. This implies --no-server and no mill server will be used. Must be the
                       first argument.
  --import <str>       Additional ivy dependencies to load into mill, e.g. plugins.
  -j --jobs <int>      Allow processing N targets in parallel. Use 1 to disable parallel and 0 to
                       use as much threads as available processors.
  -k --keep-going      Continue build, even after build failures.
  --no-default-predef  Disable the default predef and run Mill with the minimal predef possible.
  --no-server          Run Mill in single-process mode. In this mode, no mill server will be started
                       or used. Must be the first argument.
  -p --predef <path>   Lets you load your predef from a custom location, rather than the "default
                       location in your Ammonite home
  --repl               Run Mill in interactive mode and start a build REPL. This implies --no-server
                       and no mill server will be used. Must be the first argument.
  -s --silent          Make ivy logs during script import resolution go silent instead of printing;
                       though failures will still throw exception.
  -v --version         Show mill version information and exit.
  -w --watch           Watch and re-run your scripts when they change.
  target <str>...      The name or a pattern of the target(s) you want to build, followed by any
                       parameters you wish to pass to those targets. To specify multiple target
                       names or patterns, use the `+` separator.

All options must be given before the first target.

A target is a fully qualified task or command optionally followed by target specific arguments. You can use wildcards and brace-expansion to select multiple targets at once or to shorten the path to deeply nested targets. If you provide optional target arguments and your wildcard or brace-expansion is resolved to multiple targets, the arguments will be applied to each of the targets.

Table 1. Wildcards and brace-expansion

Wildcard   Function
_          matches a single segment of the target path
__         matches arbitrary segments of the target path
{a,b}      is equal to specifying two targets a and b

You can use the + symbol to add another target with optional arguments. If you need to pass a + as an argument to your target, you can mask it by preceding it with a backslash (\).

Examples

mill foo._.compile

Runs compile for all direct sub-modules of foo

mill foo.__.test

Runs test for all sub-modules of foo

mill {foo,bar}.__.testCached

Runs testCached for all sub-modules of foo and bar

mill __.compile + foo.__.test

Runs all compile targets and all tests under foo.

Built-in commands

Mill comes with a number of useful commands out of the box.

init

$ mill -i init com-lihaoyi/mill-scala-hello.g8
....
A minimal Scala project.

name [Scala Seed Project]: hello

Template applied in ./hello

The init command generates a project based on a Giter8 template. It prompts you to enter a project name and creates a folder with that name. You can use it to quickly generate a starter project. There are lots of templates out there for many frameworks and tools!

resolve

$ mill resolve _
[1/1] resolve
clean
foo
inspect
par
path
plan
resolve
show
shutdown
version
visualize
visualizePlan

$ mill resolve _.compile
[1/1] resolve
foo.compile

$ mill resolve foo._
[1/1] resolve
foo.allSourceFiles
foo.allSources
foo.ammoniteReplClasspath
foo.ammoniteVersion
foo.artifactId
foo.artifactName
...

resolve lists the tasks that match a particular query, without running them. This is useful for "dry running" a Mill command to see what would be run before actually running it, or for exploring what modules or tasks are available from the command line using resolve _, resolve foo._, etc.

mill resolve foo.{compile,run}
mill resolve "foo.{compile,run}"
mill resolve foo.compile foo.run
mill resolve _.compile          # list the compile tasks for every top-level module
mill resolve __.compile         # list the compile tasks for every module
mill resolve _                  # list every top level module and task
mill resolve foo._              # list every task directly within the foo module
mill resolve __                 # list every module and task recursively

inspect

$ mill inspect foo.run
[1/1] inspect
foo.run(JavaModule.scala:442)
    Runs this module's code in a subprocess and waits for it to finish

Inputs:
    foo.finalMainClass
    foo.runClasspath
    foo.forkArgs
    foo.forkEnv
    foo.forkWorkingDir

inspect is a more verbose version of resolve. In addition to printing out the name of one or more tasks, it also displays each task's source location and a list of its input tasks. This is very useful for debugging and interactively exploring the structure of your build from the command line.

inspect also works with the same _/__ wildcard/query syntaxes that resolve does:

mill inspect foo.compile
mill inspect foo.{compile,run}
mill inspect "foo.{compile,run}"
mill inspect foo.compile foo.run
mill inspect _.compile
mill inspect __.compile
mill inspect _
mill inspect foo._
mill inspect __

show

$ mill show foo.scalaVersion
[1/1] show
"2.13.1"

By default, Mill does not print out the metadata from evaluating a task. Most people would not be interested in e.g. viewing the metadata related to incremental compilation: they just want to compile their code! However, if you want to inspect the build to debug problems, you can make Mill show you the metadata output for a task using the show command:

show is not just for showing configuration values. All tasks return values that can be shown with show. E.g. compile returns the paths to the classes folder and the analysisFile produced by the compilation:

$ mill show foo.compile
[1/1] show
[10/25] foo.resources
{
    "analysisFile": "/Users/lihaoyi/Dropbox/Github/test//out/foo/compile.dest/zinc",
    "classes": "ref:07960649:/Users/lihaoyi/Dropbox/Github/test//out/foo/compile.dest/classes"
}

show is generally useful as a debugging tool, to see what is going on in your build:

$ mill show foo.sources
[1/1] show
[1/1] foo.sources
[
    "ref:8befb7a8:/Users/lihaoyi/Dropbox/Github/test/foo/src"
]

$ mill show foo.compileClasspath
[1/1] show
[2/11] foo.resources
[
    "ref:c984eca8:/Users/lihaoyi/Dropbox/Github/test/foo/resources",
    ".../org/scala-lang/scala-library/2.13.1/scala-library-2.13.1.jar"
]

show is also useful for interacting with Mill from external tools, since the JSON it outputs is structured and easily parsed and manipulated.
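
For example, an external script can shell out to mill show and read the result as JSON. Here is a minimal sketch in Scala, assuming the ujson library is available on the classpath; the module and task names are just the ones used above:

import scala.sys.process._

object ShowClasspath {
  def main(args: Array[String]): Unit = {
    // ask Mill for the compile classpath of the `foo` module; the JSON is printed to stdout
    val json = Seq("mill", "show", "foo.compileClasspath").!!

    // entries may look like "ref:<hash>:/absolute/path"; keep only the path part
    val paths = ujson.read(json).arr.map(_.str.split(":").last)
    paths.foreach(println)
  }
}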

When show is used with multiple targets, its output changes slightly: it becomes a JSON array containing the results of all the given targets.

$ mill show "foo.{sources,compileClasspath}"
[1/1] show
[2/11] foo.resources
[
  [
    "ref:8befb7a8:/Users/lihaoyi/Dropbox/Github/test/foo/src"
  ],
  [
    "ref:c984eca8:/Users/lihaoyi/Dropbox/Github/test/foo/resources",
    ".../org/scala-lang/scala-library/2.13.1/scala-library-2.13.1.jar"
  ]
]

showNamed

Same as show, but the output is always structured as a JSON dictionary, with the task names as keys and the task results as JSON values.

$ mill showNamed "foo.{sources,compileClasspath}"
[1/1] show
[2/11] foo.resources
{
  "foo.sources":
  [
    "ref:8befb7a8:/Users/lihaoyi/Dropbox/Github/test/foo/src"
  ],
  "foo.compileClasspath":
  [
    "ref:c984eca8:/Users/lihaoyi/Dropbox/Github/test/foo/resources",
    ".../org/scala-lang/scala-library/2.13.1/scala-library-2.13.1.jar"
  ]
}

path

$ mill path foo.assembly foo.sources
[1/1] path
foo.sources
foo.allSources
foo.allSourceFiles
foo.compile
foo.localClasspath
foo.assembly

mill path prints out a dependency chain between the first task and the second. It is very useful for exploring the build graph and trying to figure out how data gets from one task to another. If there are multiple possible dependency chains, one of them is picked arbitrarily.

plan

$ mill plan foo.compileClasspath
[1/1] plan
foo.transitiveLocalClasspath
foo.resources
foo.unmanagedClasspath
foo.scalaVersion
foo.platformSuffix
foo.compileIvyDeps
foo.scalaOrganization
foo.scalaLibraryIvyDeps
foo.ivyDeps
foo.transitiveIvyDeps
foo.compileClasspath

mill plan foo shows which tasks would be evaluated, and in what order, if you ran mill foo, but without actually running them. This is a useful tool for debugging your build: e.g. if you suspect a task foo is running things that it shouldn't be, a quick mill plan foo will list out all the upstream tasks that foo needs to run, and you can then follow up with mill path on any individual upstream task to see exactly how foo depends on it.

visualize

$ mill show visualize foo._
[1/1] show
[3/3] visualize
[
    ".../out/visualize.dest/out.txt",
    ".../out/visualize.dest/out.dot",
    ".../out/visualize.dest/out.json",
    ".../out/visualize.dest/out.png",
    ".../out/visualize.dest/out.svg"
]

mill show visualize takes a subset of the Mill build graph (e.g. core._ is every task directly under the core module) and draws out their relationships in .svg and .png form for you to inspect. It also generates .txt, .dot and .json for easy processing by downstream tools.

The above command generates the following diagram:

VisualizeFoo.svg

visualizePlan

$ mill show visualizePlan foo.compile
[1/1] show
[3/3] visualizePlan
[
    ".../out/visualizePlan.dest/out.txt",
    ".../out/visualizePlan.dest/out.dot",
    ".../out/visualizePlan.dest/out.json",
    ".../out/visualizePlan.dest/out.png",
    ".../out/visualizePlan.dest/out.svg"
]

mill show visualizePlan is similar to mill show visualize except that it shows a graph of the entire build plan, including tasks not directly resolved by the query. Tasks directly resolved are shown with a solid border, and dependencies are shown with a dotted border.

The above command generates the following diagram:

VisualizePlan.svg

Another use case is to view the relationships between modules. For the following two modules:

build.sc
import mill._, scalalib._

object foo extends ScalaModule {
  def scalaVersion = "2.13.1"
}

object bar extends ScalaModule {
  def moduleDeps = Seq(foo)

  def scalaVersion = "2.13.1"
}

mill show visualizePlan _.compile diagrams the relationships between the compile tasks of each module, which illustrates which module depends on which other module’s compilation output:

VisualizeCompile.svg

clean

$ mill clean

clean deletes all the cached outputs of previously executed tasks. It can apply to the entire project, entire modules, or specific tasks.

mill clean                     # clean all outputs
mill clean foo                 # clean all outputs for module 'foo' (including nested modules)
mill clean foo.compile         # only clean outputs for task 'compile' in module 'foo'
mill clean foo.{compile,run}
mill clean "foo.{compile,run}"
mill clean foo.compile foo.run
mill clean _.compile
mill clean __.compile

Search for dependency updates

$ mill mill.scalalib.Dependency/showUpdates

Mill can search for updated versions of your project’s dependencies, if available from your project’s configured repositories. Note that it uses heuristics based on common versioning schemes, so it may not work as expected for dependencies with particularly weird version numbers.

Current limitations:

  • Only works for JavaModule modules (including ScalaModules, CrossScalaModules, etc.) and Maven repositories.

  • Always applies to all modules in the build.

  • Doesn’t apply to $ivy dependencies used in the build definition itself.

mill mill.scalalib.Dependency/showUpdates
mill mill.scalalib.Dependency/showUpdates --allowPreRelease true # also show pre-release versions

The Build REPL

$ mill --repl
Loading...
@ foo
res0: foo.type = ammonite.predef.build#foo:4
Commands:
    .ideaJavaModuleFacets(ideaConfigVersion: Int)()
    .ideaConfigFiles(ideaConfigVersion: Int)()
    .ivyDepsTree(inverse: Boolean, withCompile: Boolean, withRuntime: Boolean)()
    .runLocal(args: String*)()
    .run(args: String*)()
    .runBackground(args: String*)()
    .runMainBackground(mainClass: String, args: String*)()
    .runMainLocal(mainClass: String, args: String*)()
    .runMain(mainClass: String, args: String*)()
    .console()()
    .repl(replOptions: String*)()
Targets:
...

@ foo.compile
res1: mill.package.T[mill.scalalib.api.CompilationResult] = foo.compile(ScalaModule.scala:143)
    Compiles the current module to generate compiled classfiles/bytecode

Inputs:
    foo.upstreamCompileOutput
    foo.allSourceFiles
    foo.compileClasspath
...

@ foo.compile()
[25/25] foo.compile
res2: mill.scalalib.api.CompilationResult = CompilationResult(
  /Users/lihaoyi/Dropbox/Github/test/out/foo/compile.dest/zinc,
  PathRef(/Users/lihaoyi/Dropbox/Github/test/out/foo/compile.dest/classes, false, -61934706)
)

You can run mill --repl to open a build REPL; this is a Scala console with your build.sc loaded, which lets you run tasks interactively. The task-running syntax is slightly different from the command-line, but more in line with how you would depend on tasks from within your build file.

You can use this REPL to interactively explore your build to see what is available.

Deploying your code

The two most common things to do once your code is complete are to make an assembly (e.g. for deployment/installation) or to publish it (e.g. to Maven Central). Mill comes with both capabilities built in.

Mill comes with the ability to make assemblies. Given a simple Mill build:

build.sc
import mill._, scalalib._

object foo extends ScalaModule {
  def scalaVersion = "2.13.1"
}

You can make a self-contained assembly via:

$ mill foo.assembly

$ ls -lh out/foo/assembly.dest/out.jar
-rw-r--r--  1 lihaoyi  staff   5.0M Feb 17 11:14 out/foo/assembly.dest/out.jar

You can then move the out.jar file anywhere you would like, and run it standalone using java:

$ java -cp out/foo/assembly.dest/out.jar foo.Example
Hello World!
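
If you don't want to spell the main class out on the java command line, you can set it in the build. Mill should then record it in the assembly's manifest, so the jar can be run directly; foo.Example is just the class used above:

build.sc
import mill._, scalalib._

object foo extends ScalaModule {
  def scalaVersion = "2.13.1"

  // used by `run`, and written into the assembly's Main-Class manifest entry
  def mainClass = Some("foo.Example")
}

$ java -jar out/foo/assembly.dest/out.jar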

To publish to Maven Central, you need to make foo also extend Mill’s PublishModule trait:

build.sc
import mill._, scalalib._, publish._

object foo extends ScalaModule with PublishModule {
  def scalaVersion = "2.13.1"

  def publishVersion = "0.0.1"

  def pomSettings = PomSettings(
    description = "Hello",
    organization = "com.lihaoyi",
    url = "https://github.com/lihaoyi/example",
    licenses = Seq(License.MIT),
    versionControl = VersionControl.github("lihaoyi", "example"),
    developers = Seq(
      Developer("lihaoyi", "Li Haoyi", "https://github.com/lihaoyi")
    )
  )
}

You can change the name of the published artifact (artifactId in the Maven POM) by overriding artifactName in the module you want to publish.
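
For example, to publish the module above under the artifact name hello-world rather than the default module name foo (the name here is arbitrary):

build.sc
import mill._, scalalib._, publish._

object foo extends ScalaModule with PublishModule {
  def scalaVersion = "2.13.1"

  def publishVersion = "0.0.1"

  def pomSettings = PomSettings(
    description = "Hello",
    organization = "com.lihaoyi",
    url = "https://github.com/lihaoyi/example",
    licenses = Seq(License.MIT),
    versionControl = VersionControl.github("lihaoyi", "example"),
    developers = Seq(
      Developer("lihaoyi", "Li Haoyi", "https://github.com/lihaoyi")
    )
  )

  // published as "hello-world" instead of the default module name "foo"
  def artifactName = "hello-world"
}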

You can download an example project with this layout here:

You can then publish it using the mill foo.publish command, which takes your Sonatype credentials (e.g. lihaoyi:foobarbaz) and GPG password as inputs:

$ mill foo.publish
Missing arguments: (--sonatypeCreds: String, --release: Boolean)

Arguments provided did not match expected signature:

publish
  --sonatypeCreds   String (format: "username:password")
  --signed          Boolean (default true)
  --gpgArgs         Seq[String] (default Seq("--batch", "--yes", "-a", "-b"))
  --readTimeout     Int (default 60000)
  --release         Boolean (default true)
  --connectTimeout  Int (default 5000)
  --awaitTimeout    Int (default 120000)
  --stagingRelease  Boolean (default true)

You also need to specify release as true or false, depending on whether you just want to stage your module on oss.sonatype.org or you want Mill to complete the release process to Maven Central.

If you are publishing multiple artifacts, you can also use mill mill.scalalib.PublishModule/publishAll.

Running Mill with custom JVM options

It’s possible to pass JVM options to the Mill launcher. To do this you need to create a .mill-jvm-opts file in your project’s root. This file should contain JVM options, one per line.

For example, if your build requires a lot of memory and a bigger stack size, your .mill-jvm-opts could look like this:

-Xss10m
-Xmx10G

The file name used for passing JVM options to the Mill launcher is configurable. If for some reason you don't want to use the .mill-jvm-opts file name, set the MILL_JVM_OPTS_PATH environment variable to the name of the file you want to use instead.


Come by our Gitter Channel if you want to ask questions or say hi!