GAMS Engine comes with a highly flexible model and user management system that allows you to restrict the activities of your users according to your organizational hierarchy. To understand this in its entirety, we need to address a few concepts in more detail: models, namespaces, permissions and user types.


To illustrate some of the aspects, the Engine UI is used here. However, each of the steps shown below can also be achieved with the help of custom clients.


A GAMS job usually consists of model files and data files. While the data typically differs with each job, the actual GAMS model remains mostly unchanged. To avoid submitting the same files to Engine with every job, models can be registered with Engine. The files of a registered model are stored by Engine and reused for subsequent jobs: you only need to provide the name under which the model is registered and send the dynamic data.

  • Job with unregistered model
    The user provides both the model files as well as the data to run the model with.

Unregistered model

  • Job with registered model
    The GAMS model is registered with Engine and the user only provides the model name and dynamic data to run the model with.

Registered model

If a job submission includes files that are already stored as part of the registered model on Engine, the files submitted with the job take precedence.

Namespaces and Permissions

Models are organized in namespaces, which you can think of as file directories. Models are always registered in a namespace and submitted GAMS jobs are also executed there. Each model name can only occur once within a namespace. If two models with the same name are to be registered, this must be done in different namespaces. Apart from the ability to manage multiple models with the same name, namespaces are particularly useful for representing hierarchical (user) structures. This is due to an important property of namespaces, namely the concept of permissions.

Namespaces are similar to directories in the UNIX file system: you can specify which users have access to which namespaces. Just like the UNIX file system, GAMS Engine has three types of permissions on namespaces:
  • Read Permission
    Users can download GAMS models registered in this namespace
  • Write Permission
    Users can register new models in this namespace
  • Execute Permission
    Users can execute models in this namespace
The table below shows the mapping of these permissions to the permission codes used by GAMS Engine.

Permission Code   Read Permission   Write Permission   Execute Permission
0                 -                 -                  -
1                 -                 -                  x
2                 -                 x                  -
3                 -                 x                  x
4                 x                 -                  -
5                 x                 -                  x
6                 x                 x                  -
7                 x                 x                  x
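Assuming the UNIX-style bit values (read = 4, write = 2, execute = 1), a permission code can be decoded with a few bitwise operations. A minimal sketch in Python, illustrative only and not part of the Engine API:

```python
# Decode a GAMS Engine namespace permission code (0-7) into its
# read/write/execute flags, assuming UNIX-style bit values:
# read = 4, write = 2, execute = 1.
READ, WRITE, EXECUTE = 4, 2, 1

def decode_permission(code: int) -> dict:
    if not 0 <= code <= 7:
        raise ValueError("permission code must be between 0 and 7")
    return {
        "read": bool(code & READ),
        "write": bool(code & WRITE),
        "execute": bool(code & EXECUTE),
    }
```

For example, `decode_permission(5)` yields read and execute permission but no write permission.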

In the following example, two namespaces are used by three people with different permissions to represent the structure and workflow of a company with a customer.

  • Developer:
    The developer is responsible for maintaining existing models and developing new ones. He works exclusively in the namespace R&D provided for this purpose, in which he has all permissions.
  • Manager:
    The manager has read and execute permissions on the namespace R&D. There she checks the models for customer suitability. If a model (update) is ready, then the manager registers it in the namespace production.
  • Customer:
    The customer has read and execute permissions on the namespace production. With these permissions she can submit jobs with her data and receive the results.

Permission hierarchy

By default, the result files to which the customer has access also contain the model files. To prevent a user from accessing your model files, you have to specify a proper INEX file in addition to revoking read permission from this user!


After setting up GAMS Engine, there is a single namespace available with the name: global. Adding namespaces as well as registering models can easily be done via the Engine UI in the Models view.

Add namespace and register model

User Management

After first installing the system, there is a single default user available with the name: admin and password: admin.


We strongly recommend that you change the default password for the admin user immediately after installing Engine to prevent unauthorized users from accessing your system!

If you are an administrator, you can invite other users. If you are using the Engine UI, you will find an Invite User button in the upper right corner of the Users section.

Invite a new user

GAMS Engine distinguishes three types of users:

  • Users
    Regular users can interact with GAMS Engine (e.g. submit jobs) according to the namespace permissions assigned to them, but cannot invite other users.
  • Inviters
    Inviters are users with the additional privilege to invite new users to use GAMS Engine. However, inviters can only invite users who have the same or fewer permissions than themselves. For example, if an inviter has read and execute permissions on the namespace global, she is not allowed to invite a user with write permission on this namespace. Inviters can invite other inviters, but not admins.
  • Administrators
    Administrators are the most privileged users. They have full permissions for each namespace and are allowed to invite new users, even new administrators. Administrators can also add new namespaces or remove existing ones.

In order to add a new user to GAMS Engine, you need to generate an invitation code. Users can then register themselves by providing this invitation code, a username and a password. Invitation codes can be generated by administrators and inviters. When creating an invitation code, permissions to namespaces can be assigned so that the new user can start interacting with the system directly. Note that the invitee's permissions may be lower than the inviter's, but not higher. Furthermore, inviters are able to manage - i.e. modify permissions and delete - their children (invitees) as well as any grandchildren. In this way, you can set up several hierarchy levels.

User hierarchy example:

User hierarchy

Users only have access to jobs that they have submitted themselves. Inviters can see and interact with their own jobs as well as those of all users invited by them directly or indirectly. Administrators have access to the jobs of all users.

User groups

Users can be assigned to user groups. A user group is a collection of users with any role (user, inviter, or administrator). A user group is always assigned to a namespace and has a unique label/name within that namespace. To create or remove a user group, you must be an inviter with write permission in that namespace (or an administrator).

Any member of a group can see all other members within that group. In addition, inviters can see the members of all their invitees' groups. Only inviters and admins can add and remove members from a group. While admins can remove any member from any group, inviters can only remove their invitees. Also, inviters can only remove members from groups that they are either a member of themselves or that belong to an invitee.
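These removal rules can be sketched as a small predicate. Everything here is illustrative: the role strings, the flattened invitee set, and in particular the reading of "a group that belongs to an invitee" as a group created by one of the inviter's invitees are assumptions, not the Engine data model:

```python
def can_remove_member(actor_name, actor_role, member_name,
                      group_owner, group_members, invitees):
    """Sketch: may `actor_name` remove `member_name` from a group?

    actor_role:    "admin", "inviter", or "user" (assumed role strings)
    group_owner:   user who created the group (assumed interpretation
                   of a group "belonging to" a user)
    group_members: set of usernames in the group
    invitees:      set of the actor's direct and indirect invitees
    """
    # Admins can remove any member from any group.
    if actor_role == "admin":
        return True
    # Besides admins, only inviters may remove members at all.
    if actor_role != "inviter":
        return False
    # Inviters can only remove their own (direct/indirect) invitees ...
    if member_name not in invitees:
        return False
    # ... and only from groups they are a member of themselves or
    # that belong to one of their invitees.
    return actor_name in group_members or group_owner in invitees
```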

User groups example

User groups can be used to restrict the visibility of models in a namespace. When registering a new model (or updating an existing model), you can specify which user groups can see that model. All users who can see at least one of the user groups to which the model is assigned can see the model. If a model is not assigned to a user group, the model is visible to anyone who has any permission (read, write, and/or execute) in that namespace.
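The visibility rule can be summarized in a few lines. This is a sketch of the rule as described above; the data representation (sets of group labels, a numeric permission code) is invented for illustration:

```python
def model_visible_to(user_groups, user_permission_code, model_groups):
    """Return True if a user can see a model in a namespace.

    user_groups:          set of group labels the user can see there
    user_permission_code: the user's permission code (0-7) there
    model_groups:         set of group labels the model is assigned to
    """
    if not model_groups:
        # A model not assigned to any group is visible to anyone
        # with any permission (read, write, and/or execute).
        return user_permission_code > 0
    # Otherwise the user must see at least one of the model's groups.
    return bool(user_groups & model_groups)
```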

Example to use user groups to restrict visibility of models

Jobs can also be shared with user groups. When you submit a new (Hypercube) job, you can specify which user groups have access to the job. Anyone who has access to the user group can then access the job (view the job details, cancel the job, download/delete the results, etc.). The access groups of a job can be edited even after the job has been submitted (via the PUT /jobs/{token}/access-groups endpoint).

Example for job access groups

When you share a job, any member of the group you shared the job with can download the job results. This can include model files of models that the user accessing the job cannot see. Use INEX files to restrict which files are included in the results archive.


To interact with GAMS Engine, you must authenticate yourself. Engine currently supports two ways to authenticate: Basic authentication as well as authentication using JSON Web Tokens (JWT).

Whereas with basic authentication you send your username and password with each request, with JWT a token is used for authentication. JWTs are only valid for a certain amount of time, which you can specify when generating a new token (via the POST /auth/login endpoint).

Whenever your username or password is changed (either by yourself or by an administrator), all your JWTs are automatically invalidated and you have to generate new ones. You can also manually invalidate all your JWTs via the POST /auth/logout endpoint. It is not possible to invalidate individual tokens.

One advantage of JWT authentication over basic authentication is that if you want to implement a "stay logged in" feature on your client, you store a random-looking token instead of the user's password (which is often reused for multiple services); in fact, this is what Engine UI, GAMS Studio and GAMS MIRO do. In addition, you can restrict what a particular token can access by making use of "access scopes":

In GAMS Engine, you can specify what you can do with a particular token: You can make a token accessible only to GET endpoints with the READONLY access scope, or you can specify which API endpoints (for example, JOBS or NAMESPACES) can be accessed with that particular token. This allows you to ensure, for example, that a particular CI job can only add/update/delete models but cannot submit new jobs, etc.
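As a sketch, a scope check might look like the following. Only the scope names READONLY, JOBS and NAMESPACES come from the text above; the exact semantics of combining scopes, and the idea that an empty scope set means an unrestricted token, are assumptions made for illustration:

```python
def token_permits(scopes, method, endpoint_group):
    """Sketch: decide whether a token with the given access scopes may
    call an endpoint.

    scopes:         set of scope names, e.g. {"READONLY"} or {"JOBS"};
                    an empty set is assumed to mean "unrestricted"
    method:         HTTP method of the request, e.g. "GET" or "POST"
    endpoint_group: API group of the endpoint, e.g. "JOBS"
    """
    if not scopes:
        return True
    # READONLY makes the token usable for GET endpoints only.
    if "READONLY" in scopes and method.upper() == "GET":
        return True
    # Otherwise the endpoint's API group must be explicitly allowed.
    return endpoint_group in scopes
```

With this rule, a CI token scoped to {"NAMESPACES"} could register models but not submit jobs, matching the example in the text.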


GAMS Engine allows you to limit the solve time as well as the disk usage of individual users or groups of users.

Let's take the user hierarchy from the previous section on user management to give you an example of how quotas work:

User hierarchy example

Only admins and inviters can update quotas of users. While admins can update the quotas of all users, inviters can only update the quotas of their direct/indirect invitees (we also call this the "subtree" of an inviter). An inviter cannot assign a quota larger than his own.

There are three types of quotas that can be assigned: volume_quota, disk_quota and parallel_quota (only used in the Engine Kubernetes version).

If "Admin" sets the volume_quota of "Student 1" to 60 seconds, the student can only run jobs for a total of one minute. If the quota is exceeded, new jobs submitted by "Student 1" are rejected by Engine. In addition, currently running jobs are canceled (using SIGKILL) when the volume quota of the user who submitted the job is exhausted.

If "Admin" sets the volume_quota of "Teacher" to 1800 seconds, it means that the "Teacher", "Student 1", "Student 2", "Assistant" and anyone invited by the "Assistant" can run GAMS for a total of 1800 seconds. This allows administrators to invite new "entities" by creating a new inviter that has quotas assigned to it. Both the inviter and anyone invited by that inviter are now limited by those quotas.

Quotas allow further restrictions. Imagine the case where "Admin" has assigned 1800 seconds of volume_quota to "Teacher". As an inviter, "Teacher" can give students any volume_quota that is less than or equal to 1800 seconds. "Teacher" could give "Student 1" and "Student 2" each 600 seconds and "Assistant" 800 seconds. In this case, "Student 1" can use up to 600 seconds, "Student 2" can use up to 600 seconds, and "Assistant" can use up to 800 seconds.

Quotas allow for overbooking. An attentive reader may have noticed that in the last example, the teacher had 1800 seconds but provided a total of 2000 seconds to her invitees. This does not mean that the teacher's subtree has more than 1800 seconds in total. When "Student 1" and "Student 2" have used up their entire allotment, only 600 seconds remain in the subtree. Therefore, the "Assistant" can no longer use the full 800 seconds allocated to him by the teacher.
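The overbooking arithmetic can be made explicit. A minimal sketch, assuming a user's usable time is their own quota capped by whatever remains of the inviter's (subtree) quota:

```python
def usable_seconds(own_quota, subtree_quota, subtree_used):
    """Seconds a user can still consume: their own quota, capped by
    what is left of the inviter's subtree quota (overbooking)."""
    return min(own_quota, subtree_quota - subtree_used)

# Teacher's subtree has 1800 s. Both students used their full 600 s,
# so only 600 s of the assistant's 800 s allotment remain usable.
remaining_for_assistant = usable_seconds(800, 1800, 600 + 600)
```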

disk_quota is used to limit the disk usage of a user as well as the user's invitees (if any). The following files contribute to disk usage:

  • Temporary/unregistered models
  • Additional data provided with the (Hypercube) job
  • INEX files provided with the (Hypercube) job
  • (Hypercube) job results

If the disk_quota is exceeded, GAMS Engine rejects all new jobs submitted by users who have exceeded their quotas. When a worker receives a job from a user whose disk_quota is exceeded, the job is cancelled.

The disk space used during solving does not contribute to the disk_quota. A job could use more disk space during solving, but only upload a small portion (perhaps a result GDX file) and would not violate the quota.

Unlike volume_quota, disk_quota has a grace space. When a job is solved, it might be undesirable to refuse to upload the result because the disk_quota is exceeded. In this scenario, the result will be uploaded if it does not exceed disk_quota + 1GB. If the grace space is also exceeded, the job is marked as "Corrupted" and the results are not available.
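The grace-space decision can be sketched as follows; the exact size of "1GB" (decimal GB vs. GiB) is an assumption here:

```python
GRACE_BYTES = 10**9  # "1GB" grace space; GB-vs-GiB is an assumption

def result_upload_outcome(used_bytes, quota_bytes, result_bytes):
    """Decide what happens to a finished job's result upload:
    'uploaded'  - fits within disk_quota + grace space,
    'corrupted' - would exceed even the grace space."""
    if used_bytes + result_bytes <= quota_bytes + GRACE_BYTES:
        return "uploaded"
    return "corrupted"
```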

disk_quota allows you to limit unregistered files. To limit registered files (such as registered models, INEX files, etc.), admins can assign a disk_quota to namespaces (available via the /namespaces/{namespace}/disk-quota API).

Similar to the volume_quota, the disk_quota allows further restrictions for invited users as well as overbooking.

Resource requests (Engine K)

While jobs in Engine One share the available resources, in Engine K you must assign resource requests to a job. Engine then takes care of scheduling these heterogeneous jobs on the available hardware. Resource requests are assigned via the labels field of the POST /jobs/ and POST /hypercube/ endpoints.

The labels field is an array of key=value pairs. The following table lists the available keys for assigning resource requests:

Key                 Required   Example                                                 Description
cpu_request         yes        cpu_request=8                                           CPU units (vCPU/Core, Hyperthread) to be reserved for the job
memory_request      yes        memory_request=1000                                     Memory to be reserved for the job, in MiB
workspace_request   yes        workspace_request=2000                                  Workspace (hard disk) space to be reserved for the job, in MiB
tolerations         -          tolerations=key1=value1,tolerations=key2=value2         Array of node taints this job should tolerate (see the Kubernetes documentation for more details)
node_selectors      -          node_selectors=key1=value1,node_selectors=key2=value2   Array of node labels this job should be scheduled on. The job can be scheduled only on a node that has all the specified labels (see the Kubernetes documentation for more details)
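Building the labels array for a job submission is plain string formatting. A hypothetical helper is sketched below; the key names come from the table above, but the helper itself is not part of any Engine client library:

```python
def resource_labels(cpu=None, memory_mib=None, workspace_mib=None,
                    tolerations=(), node_selectors=()):
    """Build the `labels` array of key=value strings for POST /jobs/."""
    labels = []
    if cpu is not None:
        labels.append(f"cpu_request={cpu}")
    if memory_mib is not None:
        labels.append(f"memory_request={memory_mib}")
    if workspace_mib is not None:
        labels.append(f"workspace_request={workspace_mib}")
    # Repeated keys are allowed for tolerations and node_selectors.
    labels += [f"tolerations={t}" for t in tolerations]
    labels += [f"node_selectors={s}" for s in node_selectors]
    return labels
```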

Assigning resource requests directly to a job is very powerful and flexible. However, you may want to limit which resources your users can use. Also, users usually don't need this level of flexibility. This is where the concept of instances comes into play, which will be described in the next chapter.

Instances (Engine K)

Instances are a way to bundle resource requests under a single label. For example, we could define a new instance with 4 vCPU, 4000 MiB of memory and a workspace size of 10000 MiB and assign it the label small. Instead of assigning the resource requests directly to the job, we assign the instance label via the key instance; in our example: instance=small. Internally, this label is then replaced by the corresponding resource requests. The instance label and the raw resource-request labels are mutually exclusive: a job cannot use both at the same time.
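Conceptually, the label substitution Engine performs can be sketched like this; the instance catalog below is invented for illustration:

```python
# Hypothetical instance catalog; the "small" sizes mirror the
# example in the text, but the data structure is illustrative only.
INSTANCES = {
    "small": {"cpu_request": 4, "memory_request": 4000,
              "workspace_request": 10000},
}

def resolve_labels(labels):
    """Replace an instance=<label> entry with its resource requests,
    mimicking what Engine does internally."""
    raw_keys = {"cpu_request", "memory_request", "workspace_request"}
    parsed = dict(label.split("=", 1) for label in labels)
    if "instance" in parsed:
        # The instance label and raw resource requests are mutually
        # exclusive.
        if raw_keys & parsed.keys():
            raise ValueError(
                "instance label cannot be combined with raw resource requests")
        requests = INSTANCES[parsed.pop("instance")]
        parsed.update({k: str(v) for k, v in requests.items()})
    return [f"{k}={v}" for k, v in parsed.items()]
```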

Instances can be assigned to users via the PUT /usage/instances/{username} endpoint. When a user is assigned instances, they can no longer use the raw resource requests (cpu_request, memory_request etc.). Instead, they must select from one of the instances assigned to them. When instances are assigned to a user, a default instance must also be selected. If the user does not specify an instance for a job, this default instance is used. Users can update their own default instance via the PUT /usage/instances/{username}/default endpoint.

If a user has no instances assigned, she inherits the instances of her inviter. If the inviter also has no instances assigned (directly or inherited), the user can use raw resource requests or select an arbitrary instance. The list of all available instances can be queried via the GET /usage/instances endpoint. Users who do not have instances assigned can still select a default instance for convenience. Admins cannot be assigned instances.

When you invite new users, you cannot assign instances to which you do not have access yourself. Thus, the invitees can only be further restricted in terms of permissible instances.

Below is an example of how instances are inherited. Boxes with a dashed gray border indicate that instances are not directly assigned to the user, but he inherits them from his inviter. Boxes with a solid black border indicate that instances are explicitly assigned to the user.

Example of an instance hierarchy


Over time, the amount of disk space required on the server can grow considerably. It is the responsibility of the admin to ensure that result data is deleted when required. The Engine API provides a cleanup endpoint for this purpose.

When working with the Engine UI, you can reduce the amount of disk space used by GAMS Engine using the Cleanup view. You can either remove result files of individual jobs one by one or clean up multiple files at once using the Run housekeeping dialog. This allows you to remove all files created more than x days ago and/or files created by users who have been removed from the system.

Run housekeeping
Tip for Administrators:

If you notice a significant difference between the disk usage report and the actual disk usage, you can manually examine the files. GAMS Engine has its own container for housekeeping purposes. In case of network interruptions or other unusual events, automatic housekeeping may be disrupted. In this case, you can always force the process manually.

Assuming you have access to the cleaner container, you can get a status report using the commands: docker exec cleaner_container_name kill -10 1 and docker logs cleaner_container_name.

You can force the cleaner to manually check and delete the files that should be deleted. To do this, run: docker exec cleaner_container_name kill -12 1


For production servers, we recommend tracking logs of GAMS Engine to ensure system stability. You can use solutions such as EFK or ELK stack. Regardless of which solution you use, you need to familiarize yourself with GAMS Engine's core parts and their logging formats.

In GAMS Engine, we chose RabbitMQ as the message broker solution. When the cluster is first spun up, RabbitMQ creates the necessary queues, exchanges, and users. This part of the process produces non-standard log entries. However, it is usually okay to skip this part as it is only done once. After that, the logging format is standard. An example of a RabbitMQ log entry: 2021-10-20 11:46:16.006 [info] <0.941.0> accepting AMQP connection <0.941.0> ( ->

We chose PostgreSQL to store most of the structured data. Similar to RabbitMQ, PostgreSQL also creates the required tables and users on the first run, so it produces non-standard log entries. Skipping these logs should not be a problem either, as they are only issued once. An example of a PostgreSQL log entry: 2021-10-20 11:46:02.076 UTC [1] LOG: database system is ready to accept connections

MongoDB stores some of the structured data and all of the unstructured data. Its bootstrap process is also similar to RabbitMQ and PostgreSQL. A sample log of MongoDB looks like this: 2021-10-20T11:46:07.054+0000 I NETWORK [initandlisten] waiting for connections on port 27017

The logging format of the Worker, Cleaner, Hypercube Unpacker, Hypercube Appender, Dependency Checker, Job Watcher, Job Spawner, Job Cleaner, and Job Canceler containers is standardized. An example log from a worker is [20/10/2021 11:46:16] INFO #: Waiting for job.. The hash sign (#) after the severity (INFO) indicates which job is associated with this log entry; in this case, no job is associated with it. Another worker log, [20/10/2021 12:11:54] INFO 1: Job Received, indicates that the worker got the job with id 1. For Hypercube jobs, the log format is different. For example, [20/10/2021 12:13:18] DEBUG 4@617007d8917cb774979dab76: Working directory cleaned refers to the 4th job of the Hypercube job 617007d8917cb774979dab76. This log format facilitates the retrieval of logs that refer to a single job across multiple containers.
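Since the runner log format is line-based and regular, it is easy to parse. A sketch with Python's re module, based only on the format shown above:

```python
import re

# Matches the runner log format described above, e.g.
#   [20/10/2021 12:13:18] DEBUG 4@617007d8917cb774979dab76: Working directory cleaned
# A job id of "#" means no job is associated with the entry; the
# optional "@<id>" part identifies the enclosing Hypercube job.
LOG_RE = re.compile(
    r"^\[(?P<timestamp>[^\]]+)\] (?P<level>\w+) "
    r"(?P<job>#|\w+)(?:@(?P<hypercube>\w+))?: (?P<message>.*)$"
)

def parse_runner_log(line):
    """Return a dict of log fields, or None if the line does not match."""
    match = LOG_RE.match(line)
    return match.groupdict() if match else None
```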

The REST API backend log format differs from the other log formats. Note that the backend is reverse-proxied by nginx. The nginx log format differs from the backend log format, but it is widely known. An example log is - - [20/Oct/2021:12:12:58 +0000] "GET /api/jobs/status-codes HTTP/1.1" 200 385 "http://localhost:30001/" "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:93.0) Gecko/20100101 Firefox/93.0" "-" The backend is a WSGI app; here is an example log: [pid: 13] (admin 1) [20/Oct/2021:12:13:01 +0000] apis.ns_users INFO User 'admin' listed all the users. We also log the user ID after the user name because it is possible to change the user name. However, the logs that come from uWSGI do not include the user ID: [pid: 13] (-) [20/Oct/2021:12:11:49 +0000] uwsgi INFO {36 vars in 401 bytes} [Wed Oct 20 12:11:49 2021] GET /api/version => generated 72 bytes in 6 msecs (HTTP/1.1 200) 3 headers in 108 bytes (1 switches on core 0)

Pro Tip for Administrators:

You can change the log format of several containers to include milliseconds in the timestamps.

The following containers accept the new GMS_RUNNER_LOG_SHOW_MSEC environment variable to add milliseconds to timestamps in log entries:

  • Broker*
  • Worker
  • Cleaner
  • Hypercube Unpacker
  • Hypercube Appender
  • Dependency Checker
  • Job Watcher
  • Job Spawner
  • Job Cleaner
  • Job Canceler
Setting the GMS_RUNNER_LOG_SHOW_MSEC environment variable to 'true' changes the logging format; not setting it, or setting it to anything else, has no effect. For example, a log entry from the worker container: [17/10/2021 14:58:47] INFO #: Start consuming on signal queue would become: [17/10/2021 14:58:47.084] INFO #: Start consuming on signal queue

* Changing the logging format of the Broker (REST API) does not change the logging format of nginx, and the logs that come from uWSGI will have 000 as milliseconds because this is not supported there. However, the logs that come from the REST API itself will have milliseconds.


Instead of polling Engine for specific events, such as when a particular job has finished, Engine also supports sending this information in the form of webhooks: HTTP POST requests to a specified URL. By default, webhooks are disabled. However, you can enable them either for administrators only or for everyone via the configuration API. A webhook configuration consists of the following components:


URL

The URL that receives the payload of the HTTP POST request.

Content Type

The media type used to serialize the data. Possible values are form (default) and json. When choosing form, the content type is set to application/x-www-form-urlencoded; when choosing json, it is set to application/json.


Secret

If a secret is specified, an additional header, X-ENGINE-HMAC, is sent, which contains the HMAC (SHA-256) of the (raw) request body. The secret must be at least 8 characters long. Below is an example of HMAC validation in Node.js:

const crypto = require('crypto');
const validateHmac = (requestHeaders, requestBody, secret) => {
  if (!('x-engine-hmac' in requestHeaders)) {
    return false;
  }
  const hmac = crypto.createHmac('sha256', secret);
  const signature = Buffer.from(requestHeaders['x-engine-hmac'], 'utf8');
  const digest = Buffer.from('sha256=' + hmac.update(requestBody).digest('hex'), 'utf8');
  if (signature.length !== digest.length) {
    return false;
  }
  return crypto.timingSafeEqual(digest, signature);
};
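The same validation can be done in Python with the standard library; hmac.compare_digest plays the role of crypto.timingSafeEqual. This sketch mirrors the Node.js example, including the sha256= prefix:

```python
import hashlib
import hmac

def validate_hmac(request_headers, request_body, secret):
    """Validate the X-ENGINE-HMAC header against the raw request body.

    request_headers: mapping with lowercase header names
    request_body:    raw body as bytes
    secret:          the webhook secret as a string
    """
    signature = request_headers.get("x-engine-hmac")
    if signature is None:
        return False
    digest = "sha256=" + hmac.new(
        secret.encode(), request_body, hashlib.sha256
    ).hexdigest()
    # Constant-time comparison to avoid timing side channels.
    return hmac.compare_digest(digest, signature)
```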

Recursive

When the recursive flag is set, the webhook applies not only to events of the webhook owner, but also to those of all her invitees. If the webhook owner is an administrator, a recursive webhook applies to every user's events.


Events

The events to subscribe to. A list of possible events and their payloads is given below. The special ALL event can be specified to subscribe to all events.

Insecure SSL

If this flag is set, no SSL certificate validation is performed when the webhook is sent over SSL. We strongly recommend that you do NOT disable certificate validation for security reasons.

Webhook events

The request body of a webhook always has the same structure: a JSON-encoded object with the fields event (string), the name of the event; username (string), the name of the user who registered the webhook; payload (object), the payload of the event; and text (string) and content (string), which both contain a human-readable representation of the event (useful, for example, for getting notified via business communication tools such as Slack, Microsoft Teams, or Discord). The payloads for all supported event types are described in detail below.

In case the content type of the webhook is set to form, the entire JSON payload is sent as a form parameter called payload.
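A receiver therefore has to handle both serializations. A sketch in Python, assuming it is handed the raw body and the Content-Type header value:

```python
import json
from urllib.parse import parse_qs

def decode_webhook_body(content_type, raw_body):
    """Decode a webhook request body into the event object, handling
    both supported content types."""
    if content_type == "application/json":
        return json.loads(raw_body)
    if content_type == "application/x-www-form-urlencoded":
        # The whole JSON document is sent as a single form field
        # called "payload".
        fields = parse_qs(raw_body.decode())
        return json.loads(fields["payload"][0])
    raise ValueError(f"unexpected content type: {content_type}")
```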

An example of a webhook payload of a JOB_FINISHED event is shown below:

  "event": "JOB_FINISHED",
  "username": "shanghai",
  "text": "GAMS Engine job of the user: 'shanghai' and token: '46bbf80d-262f-4257-86ee-4881164d6b0c' finished.",
  "content": "GAMS Engine job of the user: 'shanghai' and token: '46bbf80d-262f-4257-86ee-4881164d6b0c' finished.",
  "payload": {
    "token": "46bbf80d-262f-4257-86ee-4881164d6b0c",
    "model": "trnsport",
    "is_temporary_model": true,
    "is_data_provided": false,
    "status": 10,
    "process_status": 0,
    "stdout_filename": "log_stdout.txt",
    "namespace": "global",
    "arguments": [
    "submitted_at": "2022-03-23T09:43:42.783211",
    "finished_at": "2022-03-23T09:43:43.056074",
    "user": {
      "username": "shanghai",
      "deleted": false,
      "old_username": null


This event is triggered when a Hypercube job is finished.

Webhook payload object

Key Type Description
token string The token of the Hypercube job
model string The name of the model
is_temporary_model boolean Whether the solved model is a temporary model or a registered model
is_data_provided boolean Whether additional data was provided with the model
status integer The Engine status of the job. The Engine status can be queried via GET /jobs/status-codes
namespace string The namespace in which the job was executed.
arguments array Additional command line arguments defined for the job.
submitted string The date and time of the start of the job (ISO 8601).
finished string The date and time of completion of the job (ISO 8601).
job_count integer Number of jobs that make up this Hypercube job.
completed integer Number of jobs whose execution has been completed.
user object User who submitted this Hypercube job.

Webhook payload example

  "token": "332f6438-6485-416c-b893-089524259153",
  "model": "trnsport",
  "is_temporary_model": true,
  "is_data_provided": false,
  "status": 10,
  "namespace": "global",
  "arguments": [
  "submitted": "2022-03-17T17:01:37.406000+00:00",
  "finished": "2022-03-17T17:01:39.878000+00:00",
  "job_count": 2,
  "completed": 2,
  "user": {
    "username": "admin",
    "deleted": false,
    "old_username": null


This event is triggered when a job (not a Hypercube job) is finished.

Webhook payload object

Key Type Description
token string The token of the job
model string The name of the model
is_temporary_model boolean Whether the solved model is a temporary model or a registered model
is_data_provided boolean Whether additional data was provided with the model
status integer The Engine status of the job. The Engine status can be queried via GET /jobs/status-codes
process_status integer The exit status of the GAMS process.
stdout_filename string The name of the file to which the stdout of the GAMS process is written.
namespace string The namespace in which the job was executed.
arguments array Additional command line arguments defined for the job.
submitted_at string The date and time of the start of the job (ISO 8601).
finished_at string The date and time of completion of the job (ISO 8601).
user object User who submitted this job.

Webhook payload example

  "token": "332f6438-6485-416c-b893-089524259153",
  "model": "trnsport",
  "is_temporary_model": true,
  "is_data_provided": true,
  "status": 10,
  "process_status": 0,
  "stdout_filename": "log_stdout.txt",
  "namespace": "global",
  "arguments": [
  "submitted_at": "2022-03-17T17:01:37.406000+00:00",
  "finished_at": "2022-03-17T17:01:39.878000+00:00",
  "user": {
    "username": "admin",
    "deleted": false,
    "old_username": null