Configuration options

This page describes how to configure the runtime behavior of the NebulaStream Coordinator and Workers.

The NebulaStream Coordinator and Worker provide two ways to configure their runtime behavior.

  1. Setting configuration options on the command line.
  2. Setting configuration options in a YAML configuration file.

❗ For the NebulaStream Coordinator, configuration settings specified on the command line take precedence over the same options in the YAML configuration file.

❗ For the NebulaStream Worker, command line options do not override the settings in the YAML configuration file.

❗ For most configuration options, the key is always the same, regardless of whether it is specified on the command line or in a YAML configuration file. The exceptions are the configuration of logical sources in the NebulaStream Coordinator and of physical sources in the NebulaStream Worker.

Below, we describe how to set configuration options on the command line and in a YAML configuration file. We also describe every configuration option and provide its default value.

Configuration on the Command Line

If the NebulaStream Coordinator or Worker is started without any command line parameters, all configuration options are initialized to their default values.

To set the configuration option key to the value value, use the syntax --key=value.

💡 In the following example, the configuration option logLevel is set to LOG_INFO:

nesCoordinator --logLevel=LOG_INFO

Configuration in a YAML file

It is also possible to set configuration options in a YAML configuration file. To load the options from this file, use the command line parameter configPath.

💡 In the following example, configuration options are set using the contents of the file coordinator.yml:

nesCoordinator --configPath=coordinator.yml
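
💡 As an illustration, a minimal coordinator.yml might look as follows. The keys are the same as the command-line option names listed in the tables below; the concrete values are only examples:

```yaml
# Illustrative coordinator.yml; keys mirror the command-line option names.
logLevel: LOG_INFO
restPort: 8081
rpcPort: 4000
numWorkerThreads: 4
```

Because command line options take precedence for the Coordinator, an individual setting from this file can still be overridden, e.g. nesCoordinator --configPath=coordinator.yml --logLevel=LOG_WARNING.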

We provide templates of the YAML configuration files in the NebulaStream GitHub repository.

Coordinator Configuration Options

The configuration options of the Coordinator cover network settings, the NebulaStream optimizer, and logical sources, and enable experimental features.

General Coordinator Configuration

The following table lists general configuration options of the NebulaStream Coordinator in alphabetical order.

| Key | Default value | Description |
| --- | --- | --- |
| bufferSizeInBytes | 4096 | The size of individual TupleBuffers in bytes. This property has to be the same across the whole deployment. |
| configPath | No default | Path to a YAML configuration file. |
| coordinatorIp | 127.0.0.1 | Coordinator RPC server IP address. |
| dataPort | 4001 | Coordinator data server TCP port. Used to receive data at the coordinator. |
| enableMonitoring | false | Enable monitoring. |
| enableQueryReconfiguration | false | Enable reconfiguration of running query plans. |
| logLevel | LOG_DEBUG | The detail of log messages. Possible values are: LOG_NONE, LOG_WARNING, LOG_DEBUG, LOG_INFO, or LOG_TRACE. |
| numWorkerThreads | 1 | The number of worker threads. |
| numberOfBuffersInGlobalBufferManager | 1024 | The number of buffers in the global buffer manager. Controls how much memory the system consumes. |
| numberOfBuffersInSourceLocalBufferPool | 64 | The number of buffers in a source-local buffer pool, i.e., how many buffers a single data source can allocate. This property controls the backpressure mechanism: a data source that cannot allocate new buffers cannot ingest more data. |
| numberOfBuffersPerWorker | 128 | The number of buffers in a task-local buffer pool, i.e., how many buffers a single worker thread can allocate. |
| numberOfSlots | 65535 | The number of slots defines the amount of computing resources usable at the coordinator. This restricts the number of concurrently deployed queries and operators. |
| restIp | 127.0.0.1 | Coordinator REST server IP address. |
| restPort | 8081 | Coordinator REST server TCP port. |
| rpcPort | 4000 | Coordinator RPC server TCP port. Used to receive control messages. |

Optimizer Configuration

The following table lists configuration options of the NebulaStream optimizer in alphabetical order. These configuration options begin with the prefix optimizer..

| Key | Default value | Description |
| --- | --- | --- |
| optimizer.distributedWindowChildThreshold | 2 | Threshold for the distribution of window aggregations, i.e., the number of child operators from which on a window operator is distributed. |
| optimizer.distributedWindowCombinerThreshold | 4 | Threshold for the insertion of pre-aggregation operators, i.e., the number of child nodes from which on a combine operator is introduced between the pre-aggregation operators and the final aggregation. |
| optimizer.memoryLayoutPolicy | FORCE_ROW_LAYOUT | The memory layout policy; allows the engine to prefer a row or columnar layout. Possible values are: FORCE_ROW_LAYOUT (enforces a row layout between all operators) or FORCE_COLUMN_LAYOUT (enforces a column layout between all operators). |
| optimizer.performAdvanceSemanticValidation | false | Perform advanced semantic validation on incoming queries. ❗ This option is disabled by default because the Z3-based signature generator does not yet support all operators; enabling this check may therefore cause crashes or incorrect behavior in some cases. |
| optimizer.performDistributedWindowsOptimization | true | Enables the distribution of window aggregations across multiple nodes. To this end, the optimizer creates pre-aggregation operators that are located close to the data sources. |
| optimizer.performOnlySourceOperatorExpansion | false | Perform only source operator duplication when applying the Logical Source Expansion rewrite rule. |
| optimizer.queryBatchSize | 1 | The number of queries to be processed together. |
| optimizer.queryMergerRule | DefaultQueryMergerRule | The rule used for query merging. Valid options are: SyntaxBasedCompleteQueryMergerRule, SyntaxBasedPartialQueryMergerRule, Z3SignatureBasedCompleteQueryMergerRule, Z3SignatureBasedPartialQueryMergerRule, Z3SignatureBasedPartialQueryMergerBottomUpRule, HashSignatureBasedCompleteQueryMergerRule, ImprovedStringSignatureBasedCompleteQueryMergerRule, ImprovedStringSignatureBasedPartialQueryMergerRule, StringSignatureBasedPartialQueryMergerRule, DefaultQueryMergerRule, HybridCompleteQueryMergerRule. |
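
Optimizer options use the same --key=value syntax as all other options. For instance, the query merger rule and memory layout policy from the table above could be set like this (the chosen values are only illustrative):

```
nesCoordinator --optimizer.queryMergerRule=Z3SignatureBasedCompleteQueryMergerRule --optimizer.memoryLayoutPolicy=FORCE_COLUMN_LAYOUT
```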

Logical Sources Configuration

Logical sources can only be configured in the YAML configuration file, not on the command line, because it is not possible to define multiple logical sources on the command line.

A logical source is defined by a name (logicalSourceName) and a schema. The schema consists of a number of fields, each of which has a name and a type. Valid types are: INT8, UINT8, INT16, UINT16, INT32, UINT32, INT64, UINT64, FLOAT32, FLOAT64, BOOLEAN, and CHAR. The CHAR type also requires a length value. The type FLOAT32 represents a single-precision floating-point number; FLOAT64 is double-precision.

💡 The example below shows how to define a logical source with the name default_logical and a schema consisting of the fields id, value, and char_value.

```yaml
logicalSources:
  - logicalSourceName: "default_logical"
    fields:
      - name: "id"
        type: "UINT32"
      - name: "value"
        type: "UINT64"
      - name: "char_value"
        type: "CHAR"
        length: 5
```

Worker Configuration Options

The configuration options of the Worker cover network settings, the NebulaStream query compiler, and physical sources, and enable experimental features.

General Worker Configuration

The following table lists general configuration options of the NebulaStream Worker in alphabetical order.

| Key | Default value | Description |
| --- | --- | --- |
| bufferSizeInBytes | 4096 | The size of individual TupleBuffers in bytes. This property has to be the same across the whole deployment. |
| configPath | No default | Path to a YAML configuration file. |
| coordinatorIp | 127.0.0.1 | Coordinator RPC server IP address. |
| coordinatorPort | 4000 | Coordinator RPC server TCP port. Needs to be the same as rpcPort in the Coordinator. |
| dataPort | 0 | Data server TCP port of this worker. Used to receive data. A value of 0 means that the port is selected automatically. |
| enableMonitoring | false | Enable monitoring. |
| localWorkerIp | 127.0.0.1 | IP address of the Worker. |
| locationCoordinates | No default | Coordinates of the physical location of the worker. |
| logLevel | LOG_DEBUG | The detail of log messages. Possible values are: LOG_NONE, LOG_WARNING, LOG_DEBUG, LOG_INFO, or LOG_TRACE. |
| numWorkerThreads | 1 | The number of worker threads. |
| numaAwareness | false | Enables support for Non-Uniform Memory Access (NUMA) systems. |
| numberOfBuffersInGlobalBufferManager | 1024 | The number of buffers in the global buffer manager. Controls how much memory the system consumes. |
| numberOfBuffersInSourceLocalBufferPool | 64 | The number of buffers in a source-local buffer pool, i.e., how many buffers a single data source can allocate. This property controls the backpressure mechanism: a data source that cannot allocate new buffers cannot ingest more data. |
| numberOfBuffersPerWorker | 128 | The number of buffers in a task-local buffer pool, i.e., how many buffers a single worker thread can allocate. |
| numberOfQueues | 1 | The number of processing queues. |
| numberOfSlots | 65535 | The number of slots defines the amount of computing resources usable at this worker. This restricts the number of concurrently deployed queries and operators. |
| numberOfThreadsPerQueue | 0 | Number of threads per processing queue. |
| parentId | 0 | The ID of this node's parent in the NebulaStream IoT network topology. |
| physicalSources | No default | The physical sources provided by this worker (see Physical Sources Configuration below). |
| queryManagerMode | Dynamic | The mode in which the query manager is running. Possible values are: Dynamic (only one queue overall) or Static (one queue per query and a specified number of threads per queue). |
| queuePinList | No default | Pins specific worker threads to specific queues. ❗ This setting is deprecated and will be removed. |
| rpcPort | 0 | Worker RPC server TCP port. Used to receive control messages. A value of 0 means that the port is selected automatically. |
| sourcePinList | No default | Pins specific data sources to specific CPU cores. ❗ This setting is deprecated and will be removed. |
| workerPinList | No default | Pins specific worker threads to specific CPU cores. ❗ This setting is deprecated and will be removed. |
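
💡 For example, a worker that connects to a local coordinator could be started as follows. The worker binary name nesWorker and the chosen values are illustrative; coordinatorPort must match the coordinator's rpcPort:

```
nesWorker --coordinatorIp=127.0.0.1 --coordinatorPort=4000 --logLevel=LOG_INFO
```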

Query Compiler Configuration

The following table lists configuration options of the NebulaStream query compiler in alphabetical order. These configuration options begin with the prefix queryCompiler..

| Key | Default value | Description |
| --- | --- | --- |
| queryCompiler.compilationStrategy | OPTIMIZE | The optimization strategy for the query compiler. Possible values are: FAST, DEBUG, or OPTIMIZE. |
| queryCompiler.outputBufferOptimizationLevel | ALL | The OutputBufferAllocationStrategy. Possible values are: ALL, NO, ONLY_INPLACE_OPERATIONS_NO_FALLBACK, REUSE_INPUT_BUFFER_AND_OMIT_OVERFLOW_CHECK_NO_FALLBACK, REUSE_INPUT_BUFFER_NO_FALLBACK, or OMIT_OVERFLOW_CHECK_NO_FALLBACK. |
| queryCompiler.pipeliningStrategy | OPERATOR_FUSION | The pipelining strategy for the query compiler. Possible values are: OPERATOR_FUSION or OPERATOR_AT_A_TIME. |
| queryCompiler.windowingStrategy | DEFAULT | The windowing strategy. Possible values are: DEFAULT or THREAD_LOCAL. |

Physical Sources Configuration

Physical sources can be defined both on the command line and in the YAML configuration file.

❗ On the command line, only a single physical source can be defined. In the YAML configuration file, multiple physical sources can be defined.

The following table lists the configuration options that have to be specified for every physical source. The configuration options for physical sources begin with the prefix physicalSource..

| Key | Default value | Description |
| --- | --- | --- |
| physicalSource.logicalSourceName | No default | The name of the logical source to which this physical source belongs. |
| physicalSource.physicalSourceName | No default | The name of this physical source. |
| physicalSource.type | No default | The type of this physical source. See below for a description of the types. |
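
💡 Putting the required keys together, a single physical source could be defined on the worker command line as follows. The worker binary name nesWorker and the concrete values are illustrative; a CSVSource additionally requires the filePath option described below:

```
nesWorker --physicalSource.type=CSVSource \
          --physicalSource.logicalSourceName=default_logical \
          --physicalSource.physicalSourceName=csv_source_1 \
          --physicalSource.filePath=input.csv
```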

NebulaStream supports the following physical source types:

  • BinarySource: Reads data from a binary file.
  • CSVSource: Reads data from a CSV file and repeats the data multiple times.
  • KafkaSource: Reads data from a Kafka broker.
  • MQTTSource: Reads data from an MQTT broker.
  • MaterializedViewSource: Reads data from a materialized view.
  • OPCSource: Reads data from an OPC server.

These source types require additional configuration options, which we describe below.

BinarySource

A BinarySource can be configured with the following configuration options.

| Key | Default value | Description |
| --- | --- | --- |
| physicalSource.filePath | No default | Required. The path to the binary file that should be read. |

CSVSource

A CSVSource can be configured with the following configuration options.

| Key | Default value | Description |
| --- | --- | --- |
| physicalSource.delimiter | "," | The delimiter between the values of a record. |
| physicalSource.filePath | No default | Required. The path to the CSV file that should be read. |
| physicalSource.numberOfBuffersToProduce | 1 | Number of buffers to produce. |
| physicalSource.numberOfTuplesToProducePerBuffer | 1 | Number of tuples to produce per buffer. |
| physicalSource.skipHeader | false | Skip the first line of the file. |
| physicalSource.sourceGatheringInterval | 1 | Gathering interval of the source. |
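
In the YAML configuration file, these options are grouped per source. The exact nesting may differ between NebulaStream versions; the sketch below assumes a physicalSources list whose entries mirror the physicalSource. keys, with illustrative file name and values:

```yaml
# Illustrative sketch; assumes per-source entries mirror the physicalSource. keys.
physicalSources:
  - logicalSourceName: "default_logical"
    physicalSourceName: "csv_source_1"
    type: "CSVSource"
    filePath: "input.csv"
    skipHeader: true
    delimiter: ","
```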

KafkaSource

A KafkaSource can be configured with the following configuration options.

| Key | Default value | Description |
| --- | --- | --- |
| physicalSource.autoCommit | 1 | Boolean value where 1 equals true and 0 equals false. |
| physicalSource.brokers | No default | Kafka brokers. |
| physicalSource.connectionTimeout | 10 | Connection timeout for the source. |
| physicalSource.groupId | testGroup | Kafka consumer group ID. |
| physicalSource.topic | testTopic | Topic to listen to. |

MQTTSource

An MQTTSource can be configured with the following configuration options.

| Key | Default value | Description |
| --- | --- | --- |
| physicalSource.cleanSession | true | If true, clean up the session after the client loses the connection. If false, keep data for the client after a connection loss (persistent session). |
| physicalSource.clientId | testClient | Client ID. Needs to be unique for each connected MQTTSource. |
| physicalSource.flushIntervalMS | -1 | TupleBuffer flush interval in milliseconds. |
| physicalSource.inputFormat | JSON | Input format. Possible values are: JSON or CSV. |
| physicalSource.url | ws://127.0.0.1:9001 | URL to connect to. |
| physicalSource.qos | 2 | Quality of service. |
| physicalSource.userName | testUser | User name. Can be chosen arbitrarily. |

MaterializedViewSource

❗ This source type is experimental.

A MaterializedViewSource can be configured with the following configuration options.

| Key | Default value | Description |
| --- | --- | --- |
| physicalSource.materializedViewId | 1 | The ID of the materialized view to read from. |

OPCSource

An OPCSource can be configured with the following configuration options.

| Key | Default value | Description |
| --- | --- | --- |
| physicalSource.namespaceIndex | 1 | Namespace index of the node. |
| physicalSource.nodeIdentifier | the.answer | Node identifier. |
| physicalSource.password | No default | Password. |
| physicalSource.userName | testUser | User name. |