ScalaLoci

A programming language for distributed applications

Features

Unified
Implement all components of a distributed application in a single language
Universal
Freely express any distributed architecture
Safe
Enjoy static type-safety across components and static checks for architectural constraints

Concepts

Specify Architecture

Define the architectural relation of the components of the distributed system

@peer type Server <: {
  type Tie <: Multiple[Client]
}

@peer type Client <: {
  type Tie <: Single[Server]
}

Specify Placement

Control where data is located and computations are executed

val items: Items on Server =
  getCurrentItems()

val ui: UI on Client =
  new UI

Compose

Combine data flow across components through reactive programming

val items: Var[Items] on Server =
  Var(getCurrentItems())

on[Client] {
  Signal {
    items.asLocal() map createUIEntry
  }
}

Step-by-Step Example

The example demonstrates the design of a chat application in five steps:

  1. Declaring a Server
  2. Declaring a Client
  3. Declaring a message event on the Client and firing the event for every user input
  4. Declaring a publicMessage event on the Server aggregating all events from the clients
  5. Declaring a client-side observer for the server-side publicMessage event
@multitier object Chat {
  @peer type Server <: { type Tie <: Multiple[Client] }
  @peer type Client <: { type Tie <: Single[Server] }

  val message = on[Client] { Evt[String]() }

  val publicMessage = on[Server] {
    message.asLocalFromAllSeq map { case (_, message) => message }
  }

  def main() = on[Client] {
    publicMessage.asLocal observe println

    for (line <- io.Source.stdin.getLines)
      message.fire(line)
  }
}

A single server is tied to multiple clients, and the message event is placed on every client. Messages are sent from the clients' message events to the publicMessage event on the server, which aggregates them. The aggregated messages are then forwarded from the publicMessage event on the server to all clients, which print them to their console.
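A multitier object defines the peers and their logic but not how peer instances are launched. The following is a minimal sketch of two launcher objects in the style of the ScalaLoci examples repository, assuming the TCP communicator and a serializer (e.g., µPickle) are among the dependencies; the port number 43053 is an arbitrary choice:

```scala
import loci._
import loci.communicator.tcp._

// Starts a Server peer instance that listens for client connections over TCP.
object Server extends App {
  multitier start new Instance[Chat.Server](
    listen[Chat.Client] { TCP(43053) })
}

// Starts a Client peer instance that connects to the server over TCP.
object Client extends App {
  multitier start new Instance[Chat.Client](
    connect[Chat.Server] { TCP("localhost", 43053) })
}
```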

Getting Started

Try out ScalaLoci Examples from GitHub

Add ScalaLoci to Your Project

  1. Enable support for macro annotations in your build.sbt
    • for Scala 2.13
      scalacOptions += "-Ymacro-annotations"
    • for Scala 2.11 or 2.12 (Macro Paradise Plugin)
      addCompilerPlugin("org.scalamacros" % "paradise" % "2.1.1" cross CrossVersion.patch)
  2. Add the resolver for the ScalaLoci dependencies to your build.sbt
    resolvers += ("STG old bintray repo" at "http://www.st.informatik.tu-darmstadt.de/maven/").withAllowInsecureProtocol(true)
  3. Add the ScalaLoci dependencies that you need for your system to your build.sbt
    1. ScalaLoci language (always required):
      libraryDependencies += "de.tuda.stg" %% "scala-loci-lang" % "0.4.0"
    2. Transmitter for the types of values to be accessed remotely (built-in Scala types and standard collections are directly supported without additional dependencies)
      • REScala reactive events and signals
        libraryDependencies += "de.tuda.stg" %% "scala-loci-lang-transmitter-rescala" % "0.4.0"
    3. Network communicators to connect the different components of the distributed system
      • TCP [JVM only]
        libraryDependencies += "de.tuda.stg" %% "scala-loci-communicator-tcp" % "0.4.0"
      • WebSocket (using Akka HTTP on the JVM) [server: JVM only, client: JVM and JS web browser APIs]
        libraryDependencies += "de.tuda.stg" %% "scala-loci-communicator-ws-akka" % "0.4.0"
      • WebSocket (Play integration) [server: JVM only, client: JVM and JS web browser APIs]
        libraryDependencies += "de.tuda.stg" %% "scala-loci-communicator-ws-akka-play" % "0.4.0"
      • WebSocket (using Javalin on the JVM) [server: JVM only, client: JS web browser APIs]
        libraryDependencies += "de.tuda.stg" %% "scala-loci-communicator-ws-javalin" % "0.4.0"
      • WebRTC [JS web browser APIs only]
        libraryDependencies += "de.tuda.stg" %% "scala-loci-communicator-webrtc" % "0.4.0"
    4. Serializer for network communication
      • µPickle serialization
        libraryDependencies += "de.tuda.stg" %% "scala-loci-serializer-upickle" % "0.4.0"
      • Circe serialization
        libraryDependencies += "de.tuda.stg" %% "scala-loci-serializer-circe" % "0.4.0"
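For reference, the pieces above combine into a single build.sbt. This sketch assumes Scala 2.13, the TCP communicator, and µPickle serialization; substitute the communicator and serializer that fit your system:

```scala
// build.sbt (sketch): ScalaLoci with TCP transport and µPickle serialization
scalaVersion := "2.13.8"

scalacOptions += "-Ymacro-annotations"

resolvers += ("STG old bintray repo" at "http://www.st.informatik.tu-darmstadt.de/maven/")
  .withAllowInsecureProtocol(true)

libraryDependencies ++= Seq(
  "de.tuda.stg" %% "scala-loci-lang"               % "0.4.0",
  "de.tuda.stg" %% "scala-loci-communicator-tcp"   % "0.4.0",
  "de.tuda.stg" %% "scala-loci-serializer-upickle" % "0.4.0"
)
```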

Showcases

Chat

The example logs messages that are sent and received by each participant as a composition of the data flow from the local UI and from remote chat partners. In the application, nodes connect to multiple remote nodes and maintain a one-to-one chat with each. Users can select any chat to send messages. The messageSent event is defined as a subjective value, filtering the ui.messageTyped messages from the UI for the currently active chat partner node. The messageLog signal contains the chat log for the chat between the local peer instance and the remote node given as a parameter. It merges, via the || operator, the remote stream of chat messages from the remote instance node with the local stream subjective to that node. The chat log is a signal created using list, which extends the list by one element for each new event occurrence in the merged stream. The chatLogs signal folds the remote[Node].joined event stream, which fires for each newly connected chat partner, into a signal containing the chat logs for every chat partner, generated by calling messageLog.

@multitier object Chat {
  @peer type Node <: { type Tie <: Multiple[Node] }

  type SingleChatLog = Signal[List[String]]
  type MultiChatLogs = Signal[List[SingleChatLog]]

  val ui: UI on Node = UI()

  val messageSent = on[Node] sbj { node: Remote[Node] =>
    ui.messageTyped filter { _ => ui.isSelectedChat(node) }
  }

  def messageLog(node: Remote[Node]): Local[SingleChatLog] on Node =
    ((messageSent from node).asLocal ||
     (messageSent to node)).list 

  val chatLogs: MultiChatLogs on Node =
    remote[Node].joined.fold(List.empty[SingleChatLog]) { (chats, node) =>
      messageLog(node) :: chats
    }
}

Tweets

The example shows how the operators of a processing pipeline can be placed on different peers to count the tweets that each author produces in a tweet stream. The application receives a stream of tweets on the Input peer, selects those tagged with the "multitier" hashtag on the Filter peer, extracts the author of each tweet on the Mapper peer, and maintains a signal with a map counting the tweets of each author on the Folder peer.

@multitier object TweetAuthoring {
  @peer type Input <: { type Tie <: Single[Filter] }
  @peer type Filter <: { type Tie <: Single[Mapper] with Single[Input] }
  @peer type Mapper <: { type Tie <: Single[Folder] with Single[Filter] } 
  @peer type Folder <: { type Tie <: Single[Mapper] }

  val tweetStream: Event[Tweet] on Input =
    retrieveTweetStream()

  val filtered: Event[Tweet] on Filter =
    tweetStream.asLocal filter { tweet => tweet.hasHashtag("multitier") }

  val mapped: Event[Author] on Mapper =
    filtered.asLocal map { tweet => tweet.author }

  val folded: Signal[Map[Author, Int]] on Folder =
    mapped.asLocal.fold(Map.empty[Author, Int].withDefaultValue(0)) {
      (map, author) => map + (author -> (map(author) + 1))
    }
}
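Stripped of placement and reactives, the fold at the heart of the pipeline is an ordinary left fold over event occurrences. The following plain-Scala sketch models the same counting logic over a finite list instead of an event stream; the Tweet class, hasHashtag, and the sample data are hypothetical stand-ins:

```scala
// Plain-Scala model of the pipeline: filter -> map -> fold.
case class Tweet(author: String, text: String) {
  def hasHashtag(tag: String): Boolean = text.contains("#" + tag)
}

val tweets = List(
  Tweet("alice", "loving #multitier programming"),
  Tweet("bob", "plain old tiers"),
  Tweet("alice", "#multitier all the way"))

val counts = tweets
  .filter(_.hasHashtag("multitier"))   // the Filter peer's stage
  .map(_.author)                       // the Mapper peer's stage
  .foldLeft(Map.empty[String, Int].withDefaultValue(0)) {
    (map, author) => map + (author -> (map(author) + 1))
  }                                    // the Folder peer's stage

println(counts("alice"))  // 2
```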

Master–worker

The example shows a ScalaLoci implementation of the master–worker pattern, where a master node dispatches tasks (for simplicity, doubling a number) to workers. The taskStream on the master carries the tasks as events. The assocs signal contains the assignments of workers to tasks. It folds over the taskStream || taskResult.asLocalFromAllSeq event stream, which fires for every new task (taskStream) and every completed task (taskResult.asLocalFromAllSeq). The assignTasks method assigns a worker to the new task (taskAssocs) or enqueues the task if no worker is free (taskQueue), based on the folded event (taskChanged) and the currently connected worker instances (remote[Worker].connected). The deployTask signal subjectively provides every worker instance with the task it is assigned. Workers provide the result in the taskResult event stream, which the master aggregates into the result signal. The signal is updated on every event to contain the sum of all values carried by the events.

@multitier object MasterWorker {
  @peer type Master <: { type Tie <: Multiple[Worker] }
  @peer type Worker <: { type Tie <: Single[Master] }

  class Task(v: Int) { def exec(): Int = 2 * v }

  // to add tasks: `taskStream.fire(Task(42))`
  val taskStream: Local[Event[Task]] on Master = Evt[Task]()

  val assocs: Local[Signal[Map[Remote[Worker], Task]]] on Master =
    (taskStream || taskResult.asLocalFromAllSeq).fold(
        Map.empty[Remote[Worker], Task],
        List.empty[Task]) { (taskAssocs, taskQueue, taskChanged) =>
      assignTasks(taskAssocs, taskQueue, taskChanged, remote[Worker].connected)
    }

  val deployTask = on[Master] sbj { worker: Remote[Worker] =>
    Signal { assocs().get(worker) }
  }

  val taskResult: Event[Int] on Worker =
    deployTask.asLocal.changed collect { case Some(task) => task.exec() }

  val result: Signal[Int] on Master =
    taskResult.asLocalFromAllSeq.fold(0) { case (acc, (worker, result)) =>
      acc + result
    }
}
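The body of assignTasks is not shown above. The following plain-Scala sketch illustrates one plausible assign-or-enqueue strategy it might implement; the signature, the use of strings instead of Remote[Worker] handles, and ints instead of Task values are all hypothetical simplifications:

```scala
// Hypothetical sketch of assign-or-enqueue logic: free workers pick up
// pending tasks; tasks with no free worker stay in the queue.
def assignTasks(
    assocs: Map[String, Int],     // worker -> currently assigned task
    queue: List[Int],             // tasks waiting for a free worker
    newTask: Option[Int],         // freshly fired task, if any
    connected: List[String]): (Map[String, Int], List[Int]) = {
  val pending  = queue ++ newTask                  // append the new task
  val free     = connected.filterNot(assocs.contains)
  val assigned = free.zip(pending)                 // pair free workers with tasks
  (assocs ++ assigned, pending.drop(assigned.size))
}

val (assocs, rest) =
  assignTasks(Map.empty, List(1, 2, 3), None, List("w1", "w2"))
// assocs == Map("w1" -> 1, "w2" -> 2), rest == List(3)
```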

Token Ring

The example models a token ring, where every node in the ring can send a token addressed to another node. Multiple tokens can circulate in the ring simultaneously until they reach their destination. Every node has exactly one predecessor and one successor. We define a Prev and a Next peer and specify that a Node itself is both a predecessor and a successor, with a single tie to its own predecessor and a single tie to its successor. Tokens are passed from predecessors to successors, hence nodes access the tokens sent from their predecessor. For this reason, values are placed on the Prev peer. Every node has a unique ID. The sendToken event sends a token along the ring to another peer instance. The recv event stream provides the data received by each peer instance. Each node fires recv when it receives a token addressed to itself, i.e., when the receiver equals the node ID, and forwards all other tokens. The expression sent.asLocal \ recv evaluates to an event stream of all events from sent.asLocal for which recv does not fire. Merging this stream of forwarded tokens with the sendToken stream via the || operator injects both new and forwarded tokens into the ring.

@multitier object TokenRing {
  @peer type Prev <: { type Tie <: Single[Prev] }
  @peer type Next <: { type Tie <: Single[Next] }
  @peer type Node <: Prev with Next { type Tie <: Single[Prev] with Single[Next] }

  val id: Id on Prev = Id()

  val sendToken: Local[Event[(Id, Token)]] on Prev = Evt[(Id, Token)]()

  val recv: Local[Event[Token]] on Prev =
    sent.asLocal collect {
      case (receiver, token) if receiver == id => token
    }

  val sent: Event[(Id, Token)] on Prev =
    (sent.asLocal \ recv) || sendToken
}
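The deliver-or-forward decision at each node can be modeled without reactives as a simple partition of incoming (receiver, token) pairs. A plain-Scala sketch, with string IDs and a minimal Token class as hypothetical stand-ins:

```scala
// One routing step at a node: tokens addressed to this node are delivered
// (the recv stream); all others are forwarded to the successor
// (the `sent.asLocal \ recv` part of the sent stream).
case class Token(payload: String)

val id = "node-2"
val incoming = List(("node-2", Token("a")), ("node-5", Token("b")))

val (delivered, forwarded) =
  incoming.partition { case (receiver, _) => receiver == id }
// delivered == List(("node-2", Token("a")))
// forwarded == List(("node-5", Token("b")))
```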

Publications

  1. Saverio Giallorenzo, Fabrizio Montesi, Marco Peressotti, David Richter, Guido Salvaneschi and Pascal Weisenburger. 2021. Multiparty Languages: The Choreographic and Multitier Cases. In Proceedings of the 35th European Conference on Object-Oriented Programming, ECOOP, Leibniz International Proceedings in Informatics (LIPIcs). July 11–17, 2021, Aarhus, Denmark. Schloss Dagstuhl – Leibniz-Zentrum für Informatik, Dagstuhl, Germany. http://doi.org/10.4230/LIPIcs.ECOOP.2021.22 [definitive version] [author version] [PDF slides] [video recording]
  2. Daniel Sokolowski, Jan-Patrick Lehr, Christian Bischof, Guido Salvaneschi. 2020. Leveraging Hybrid Cloud HPC with Multitier Reactive Programming. In Proceedings of the 3rd IEEE/ACM International Workshop on Interoperability of Supercomputing and Cloud Technologies, SuperCompCloud. November 11, 2020, Atlanta, GA, USA. IEEE, Piscataway, NY, USA, 27–32. http://doi.org/10.1109/SuperCompCloud51944.2020.00010 [definitive version] [author version]
  3. Pascal Weisenburger, Johannes Wirth, and Guido Salvaneschi. 2020. A Survey of Multitier Programming. ACM Computing Surveys, Volume 53, Issue 4, Article 81 (September 2020), 35 pages. http://doi.org/10.1145/3397495 [definitive version] [author version]
  4. Pascal Weisenburger and Guido Salvaneschi. 2020. Implementing a Language for Distributed Systems: Choices and Experiences with Type Level and Macro Programming in Scala. The Art, Science, and Engineering of Programming, Volume 4, Issue 3, Article 17 (February 2020), 29 pages. http://doi.org/10.22152/programming-journal.org/2020/4/17 [definitive version] [author version]
  5. Daniel Sokolowski, Philipp Martens, and Guido Salvaneschi. 2019. Multitier Reactive Programming in High Performance Computing. In Proceedings of the 6th ACM SIGPLAN International Workshop on Reactive and Event-Based Languages and Systems, REBLS. October 20–25, 2019, Athens, Greece. [author version]
  6. Pascal Weisenburger and Guido Salvaneschi. 2019. Multitier Modules. In Proceedings of the 33rd European Conference on Object-Oriented Programming, ECOOP, Leibniz International Proceedings in Informatics (LIPIcs). July 15–19, 2019, London, United Kingdom. Schloss Dagstuhl – Leibniz-Zentrum für Informatik, Dagstuhl, Germany. http://doi.org/10.4230/LIPIcs.ECOOP.2019.3 [definitive version] [author version] [PDF slides] [HTML slides] [video recording]
  7. Pascal Weisenburger and Guido Salvaneschi. 2019. Tutorial: Developing Distributed Systems with Multitier Programming. In Proceedings of the 13th ACM International Conference on Distributed and Event-based Systems, DEBS. June 24–28, 2019, Darmstadt, Germany. ACM, New York, NY, USA, 203–204. http://doi.org/10.1145/3328905.3332465 [definitive version] [author version]
  8. Pascal Weisenburger, Mirko Köhler, and Guido Salvaneschi. 2018. Distributed System Development with ScalaLoci. Proceedings of the ACM on Programming Languages 2, OOPSLA, Article 129 (October 2018), 30 pages. http://doi.org/10.1145/3276499 [definitive version] [author version] [PDF slides] [HTML slides] [video recording]
  9. Pascal Weisenburger, Tobias Reinhard, and Guido Salvaneschi. 2018. Static Latency Tracking with Placement Types. In Companion Proceedings for the ISSTA/ECOOP 2018 Workshops (FTfJP’18). July 16–21, 2018, Amsterdam, Netherlands. ACM, New York, NY, USA, 34–36. http://doi.org/10.1145/3236454.3236486 [definitive version] [author version]
  10. Pascal Weisenburger. 2016. Multitier Reactive Abstractions. In Companion Proceedings of the 2016 ACM SIGPLAN International Conference on Systems, Programming, Languages and Applications: Software for Humanity (SPLASH Companion 2016). October 30 – November 4, 2016, Amsterdam, Netherlands. ACM, New York, NY, USA, 18–20. http://doi.org/10.1145/2984043.2984051 [definitive version] [author version]