mirai



( 未来, “future” in Japanese )

Minimalist Async Evaluation Framework for R

High-performance parallel code execution and distributed computing.

Designed for simplicity, a ‘mirai’ evaluates an R expression asynchronously, on local or network resources, resolving automatically upon completion.

Modern networking and concurrency, built on nanonext and NNG (Nanomsg Next Gen), ensure reliable and efficient scheduling over fast inter-process communications or TCP/IP secured by TLS.

Scale Up in Production

“I tried out the mirai package and was amazed at how fast it is.”

Joe Cheng on mirai with Shiny
Will Landau on mirai in clinical trials

Quick Start

Use mirai() to evaluate an expression asynchronously in a separate, clean R process.

A ‘mirai’ object is returned immediately.


library(mirai)

input <- list(x = 2, y = 5, z = double(1e8))

m <- mirai(
  {
    res <- rnorm(1e6, mean = mean, sd = sd)
    max(res) - min(res)
  },
  mean = input$x,
  sd = input$y
)
Above, all name = value pairs are passed through to the mirai via the ... argument.

Whilst the async operation is ongoing, attempting to access the data yields an ‘unresolved’ logical NA.

m
#> < mirai [] >
#> 'unresolved' logi NA

To check whether a mirai has resolved:

!unresolved(m)
#> [1] TRUE

To wait for and collect the evaluated result, use the mirai’s [] method:

m[]
#> [1] 48.09123

It is not necessary to wait, as the mirai resolves automatically whenever the async operation completes, with the evaluated result then available at $data.

m
#> < mirai [$data] >
#> [1] 48.09123


Daemons

Daemons are persistent background processes created to receive ‘mirai’ requests.

They may be deployed for:

Local parallel processing; or

Remote network distributed computing.
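As a minimal sketch, local daemons can be set up and torn down with single calls (the number of daemons here is arbitrary):

```r
library(mirai)

daemons(4)                 # launch 4 persistent background processes

m <- mirai(Sys.getpid())   # evaluated on one of the daemons
m[]                        # wait for and collect the result

daemons(0)                 # reset: shut down all daemons
```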

Launchers allow daemons to be started both on the local machine and across the network (e.g. via SSH).

Secure TLS connections can be automatically configured on the fly for remote daemon connections.
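A sketch of launching remote daemons over SSH with automatic TLS (the remote address below is a placeholder):

```r
library(mirai)

# Listen at a TLS-secured URL on this machine (certificates are
# generated on the fly), and launch two daemons on a remote
# machine over SSH:
daemons(
  n = 2,
  url = host_url(tls = TRUE),
  remote = ssh_config(remotes = "ssh://remote.example.com")
)

daemons(0)  # shut down the daemons when finished
```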

The mirai vignette may be accessed within R by:

vignette("mirai", package = "mirai")


Integrations

The following core integrations are documented, with usage examples in the linked vignettes:

R parallel   Provides an alternative communications backend for R, implementing a low-level feature request by R-Core at R Project Sprint 2023. ‘miraiCluster’ may also be used with foreach, which is supported via doParallel.
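For instance, a ‘miraiCluster’ can stand in wherever a cluster object is accepted (a minimal sketch using base parallel):

```r
library(mirai)
library(parallel)

cl <- make_cluster(2)   # a 'miraiCluster'

res <- parLapply(cl, 1:4, function(x) x^2)
unlist(res)

stop_cluster(cl)
```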

promises   Implements the next generation of completely event-driven, non-polling promises. ‘mirai’ may be used interchangeably with ‘promises’, including with the promise pipe %...>%.
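As a sketch, a mirai may be piped directly with the promise pipe (assuming the promises package is installed; the printed message is illustrative):

```r
library(mirai)
library(promises)

daemons(1)

# The resolved value is piped onwards once the async operation completes
p <- mirai({
  Sys.sleep(1)
  "hello"
}) %...>%
  cat("world\n")

# ... later, once the event loop has run and the promise has resolved:
daemons(0)
```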

Shiny   Asynchronous parallel / distributed backend, supporting the next level of responsiveness and scalability for Shiny. Launches ExtendedTasks, or plugs directly into the reactive framework for advanced uses.
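A sketch of plugging a mirai into a Shiny ExtendedTask (bslib’s task button is used here for illustration; the computation is a placeholder):

```r
library(shiny)
library(bslib)
library(mirai)

ui <- page_fluid(
  input_task_button("go", "Compute"),
  textOutput("result")
)

server <- function(input, output, session) {
  # The ExtendedTask runs the mirai without blocking the session
  task <- ExtendedTask$new(
    function() mirai({ Sys.sleep(1); runif(1) })
  ) |> bind_task_button("go")

  observeEvent(input$go, task$invoke())
  output$result <- renderText(task$result())
}

# daemons(2)          # set up daemons before serving the app
# shinyApp(ui, server)
```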

Plumber   Asynchronous parallel / distributed backend, capable of scaling Plumber applications in production usage.

Arrow   Allows queries using the Apache Arrow format to be handled seamlessly over ADBC database connections hosted in daemon processes.

torch   Allows Torch tensors and complex objects such as models and optimizers to be used seamlessly across parallel processes.

Powering Crew and Targets High Performance Computing

targets   Targets, a Make-like pipeline tool for statistics and data science, has integrated and adopted crew as its default high-performance computing backend.

crew   Crew is a distributed worker-launcher extending mirai to different distributed computing platforms, from traditional clusters to cloud services.

crew.cluster   crew.cluster enables mirai-based workflows on traditional high-performance computing clusters using LSF, PBS/TORQUE, SGE and Slurm.

crew.aws.batch   crew.aws.batch extends mirai to cloud computing using AWS Batch.


Thanks

We would like to thank in particular:

Will Landau for being instrumental in shaping development of the package, from initiating the original request for persistent daemons, through to orchestrating robustness testing for the high performance computing requirements of crew and targets.

Joe Cheng for optimising the promises method to make mirai work seamlessly within Shiny, and prototyping non-polling promises, which is implemented across nanonext and mirai.

Luke Tierney of R Core, for discussion on L’Ecuyer-CMRG streams to ensure statistical independence in parallel processing, and making it possible for mirai to be the first ‘alternative communications backend for R’.

Henrik Bengtsson for valuable insights leading to the interface accepting broader usage patterns.

Daniel Falbel for discussion around an efficient solution to serialization and transmission of torch tensors.

Kirill Müller for discussion on using ‘daemons’ to host Arrow database connections.

R Consortium  for funding work on the TLS implementation in nanonext, used to provide secure connections in mirai.


Installation

Install the latest release from CRAN or R-multiverse:

install.packages("mirai")


The current development version is available from R-universe:

install.packages("mirai", repos = "https://shikokuchuo.r-universe.dev")

◈ mirai R package: https://shikokuchuo.net/mirai/
◈ nanonext R package: https://shikokuchuo.net/nanonext/

mirai is listed in the CRAN High Performance Computing Task View.

Please note that this project is released with a Contributor Code of Conduct. By participating in this project you agree to abide by its terms.