Langium Integration

To extend the Model Hub with your specific language, you need to provide a ModelServiceContribution. A model service contribution can provide a dedicated Model Service API and extend the persistence and validation capabilities of the Model Hub.

One of the core principles of our Model Hub architecture is re-use, so we aim to re-use as much as possible of the language infrastructure and support that Langium generates for us. This is reflected in several design decisions:

  • Since the language server that contains the modules for all languages already runs in a dedicated process, we start our own Model Hub server in the same process to ease access.

  • We re-use Langium's dependency injection framework to bind our own Model Hub-specific services, such as the core Model Hub implementation, the model manager that holds the model storage, the overall command stack, the model subscriptions, and the validation service that ensures that all custom validations are run on each model.
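To make the second point concrete, the following is a hypothetical sketch of binding Model Hub-style services in a Langium-style dependency injection module. Langium modules are plain objects of factory functions that are resolved into a services object; all class and service names below are assumptions for illustration, not the actual library API.

```typescript
// Minimal stand-ins for Model Hub-style services (names are assumptions).
class ModelStorage {
  private models = new Map<string, unknown>();
  set(id: string, model: unknown): void { this.models.set(id, model); }
  get(id: string): unknown { return this.models.get(id); }
}

class ValidationService {
  constructor(private storage: ModelStorage) {}
  validate(id: string): string[] {
    // A real service would run all registered validators on the model.
    return this.storage.get(id) !== undefined ? [] : [`No model loaded for ${id}`];
  }
}

interface ModelHubServices {
  ModelStorage: ModelStorage;
  ValidationService: ValidationService;
}

// A Langium-style module: each entry is a factory that receives the
// services object, so services can depend on sibling services.
const modelHubModule = {
  ModelStorage: () => new ModelStorage(),
  ValidationService: (s: ModelHubServices) => new ValidationService(s.ModelStorage)
};

// Simplified stand-in for Langium's `inject`: resolve factories eagerly.
function resolve(module: typeof modelHubModule): ModelHubServices {
  const services = {} as ModelHubServices;
  services.ModelStorage = module.ModelStorage();
  services.ValidationService = module.ValidationService(services);
  return services;
}
```

In the real integration, such a module is merged with the language's generated Langium module, so the Model Hub services live side by side with the parser, linker, and validator services.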

The bridge that connects the Model Hub world with the Langium world is our generic EMF Cloud Model Hub Langium integration library. That library has two main components:

  1. The Abstract Syntax Tree (AST) server that serves as a facade to access and update semantic models from the Langium language server as a non-LSP client. It provides a simple open-request-update-save/close lifecycle for documents and their semantic model.

  2. A converter between the Langium-based AST model and the client language model.
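The document lifecycle of the AST server (item 1) can be sketched as follows. The interface and method names are assumptions chosen to mirror the open-request-update-save/close lifecycle described above, not the actual library API; the in-memory implementation only exists to make the lifecycle concrete.

```typescript
// Hypothetical facade mirroring the open-request-update-save/close lifecycle.
interface AstServerFacade<T> {
  open(uri: string): Promise<void>;             // load and parse the document
  request(uri: string): Promise<T>;             // get the current semantic model
  update(uri: string, model: T): Promise<void>; // replace the semantic model
  save(uri: string): Promise<void>;             // persist to the workspace
  close(uri: string): Promise<void>;            // release the document
}

// Minimal in-memory implementation for illustration.
class InMemoryAstServer<T> implements AstServerFacade<T> {
  private docs = new Map<string, T>();
  constructor(private parse: (uri: string) => T) {}
  async open(uri: string): Promise<void> { this.docs.set(uri, this.parse(uri)); }
  async request(uri: string): Promise<T> {
    const doc = this.docs.get(uri);
    if (doc === undefined) throw new Error(`Document not open: ${uri}`);
    return doc;
  }
  async update(uri: string, model: T): Promise<void> { this.docs.set(uri, model); }
  async save(_uri: string): Promise<void> { /* would write back via the serializer */ }
  async close(uri: string): Promise<void> { this.docs.delete(uri); }
}
```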

The biggest difference between those two models is that the client language model needs to be a serializable JSON model, as we update the model using JSON patches and intend to send the model to clients that might run in a different process. Please note that this JSON model is independent of the serialization format that you define in your grammar, from which the AST is derived. The core work of the conversion is the resolution of cycles and the proper representation of cross references, so that when we get a language model back from the client, we can restore a full Langium-based AST model again, i.e., we have a fully bi-directional transformation of the semantic model.
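The cross-reference handling can be illustrated with a small sketch. In the AST, a cross reference is a direct object pointer, which cannot be serialized as JSON; the converter replaces it with a reference string and restores the pointer on the way back. The model shape and the use of an `id` as the reference string are assumptions for illustration, not the library's actual reference encoding.

```typescript
interface Machine { id: string; name: string; }
interface WorkflowAst { machines: Machine[]; active: Machine; }    // direct pointer
interface WorkflowJson { machines: Machine[]; activeRef: string; } // serializable

// AST -> client model: replace the object pointer with a reference string.
function toClientModel(ast: WorkflowAst): WorkflowJson {
  return { machines: ast.machines, activeRef: ast.active.id };
}

// Client model -> AST: resolve the reference string back to the object.
function toAstModel(json: WorkflowJson): WorkflowAst {
  const active = json.machines.find(m => m.id === json.activeRef);
  if (!active) throw new Error(`Unresolved reference: ${json.activeRef}`);
  return { machines: json.machines, active };
}
```

A round trip through `toClientModel` and `toAstModel` restores object identity, which is what makes the transformation bi-directional.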

Using the generic AST server and the AST-language model converter, we can easily implement a language-specific, typed ModelServiceContribution that the Model Hub can pick up and use: all we need to do is connect our Langium services with the respective Model Hub services. Any additional functionality that we want to expose for our language can be exported as a Model Service API in the contribution and re-used in the Model Hub or even in a dedicated server.
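A contribution exposing such a Model Service API might look like the following sketch. The exact ModelServiceContribution shape differs in the real library, so treat the interface and all names (languageId, getModelService, brewTemperature) as illustrative assumptions.

```typescript
// Hypothetical, simplified contribution shape.
interface ModelServiceContributionStub<API> {
  readonly languageId: string;
  getModelService(): API;
}

// A dedicated Model Service API for the coffee language (illustrative).
interface CoffeeModelService {
  brewTemperature(modelId: string): number;
}

class CoffeeModelServiceContribution
  implements ModelServiceContributionStub<CoffeeModelService> {
  readonly languageId = 'coffee';
  getModelService(): CoffeeModelService {
    return {
      // A real contribution would compute this from the semantic model
      // via the Langium services; we return a fixed value here.
      brewTemperature: () => 92
    };
  }
}
```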

Model Persistence

A model persistence contribution provides language-specific methods to load and store models from and to persistent storage. Based on the services generated by Langium, we can query Langium's document storage and, using the generic converter, ensure that we return a serializable model to the Model Hub. Similarly, we can re-use the generated Langium infrastructure to store the model by converting the language model back to an AST model. Furthermore, we need to ensure that any time a model is updated on the Langium side, we properly update the model on the Model Hub side. We achieve that by installing a listener on the Langium side and using the Model Manager from the Model Hub to execute a PATCH command that updates the model in the Model Hub.

For the Coffee Model, the persistence contribution may look something like this:


class CoffeePersistence implements ModelPersistenceContribution<string, CoffeeModelRoot> {
  modelHub: ModelHub;
  modelManager: ModelManager<string>;

  constructor(private modelServer: CoffeeModelServer) {
  }

  async canHandle(modelId: string): Promise<boolean> {
    return modelId.endsWith('.coffee');
  }

  async loadModel(modelId: string): Promise<CoffeeModelRoot> {
    const model = await this.modelServer.getModel(modelId);
    if (model === undefined) {
      throw new Error('Failed to load model: ' + modelId);
    }

    this.modelServer.onUpdate(modelId, async newModel => {
      try {
        // update model hub model
        const currentModel = await this.modelHub.getModel(modelId);
        const diff = compare(currentModel, newModel);
        if (diff.length === 0) {
          return;
        }
        const commandStack = this.modelManager.getCommandStack(modelId);
        const updateCommand = new PatchCommand('Update Derived Values', currentModel, diff);
        // await so that execution failures are caught by the catch below
        await commandStack.execute(updateCommand);
      } catch (error) {
        console.error('Failed to synchronize model from CoffeeLanguageService', error);
      }
    });
    return model;
  }

  async saveModel(modelId: string, model: CoffeeModelRoot): Promise<boolean> {
    try {
      await this.modelServer.save(modelId, model);
    } catch (error) {
      console.error('Failed to save model ' + modelId, error);
      return false;
    }
    return true;
  }  
}

Model Validation

A model validation contribution can provide a set of validators that work on the semantic model of the Model Hub. As a result, a validator can return a hierarchical diagnostic object that captures the infos, warnings, and errors of a particular part of the model. Using the generic transformations between the Langium and Model Hub space, the main work in this contribution is the translation from Langium's DiagnosticInfo to the Model Hub's more generic Diagnostic. Providing this translation as part of the generic EMF Cloud Model Hub Langium integration library is on the roadmap, but it can also be extracted from any public example.
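A possible shape of that translation is sketched below. Langium ultimately reports LSP-style diagnostics (severity 1 = Error, 2 = Warning, 3 = Information, 4 = Hint); the hierarchical Model Hub Diagnostic shape shown here is an assumption for illustration, not the actual Model Hub type.

```typescript
// LSP-style diagnostic as produced on the Langium side (simplified).
interface LspDiagnostic {
  severity?: 1 | 2 | 3 | 4; // Error | Warning | Information | Hint
  message: string;
  code?: string | number;
}

// Assumed hierarchical Model Hub diagnostic (illustrative shape).
interface HubDiagnostic {
  severity: 'error' | 'warn' | 'info' | 'ok';
  message: string;
  source: string;
  children: HubDiagnostic[];
}

function toHubDiagnostic(source: string, lsp: LspDiagnostic[]): HubDiagnostic {
  const children = lsp.map<HubDiagnostic>(d => ({
    severity: d.severity === 1 ? 'error' : d.severity === 2 ? 'warn' : 'info',
    message: d.message,
    source,
    children: []
  }));
  // The root aggregates the worst severity of its children.
  const worst = children.some(c => c.severity === 'error') ? 'error'
    : children.some(c => c.severity === 'warn') ? 'warn'
    : children.length > 0 ? 'info' : 'ok';
  return { severity: worst, message: `${children.length} finding(s)`, source, children };
}
```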