receiveFromHabitat

Receive data from Habitat, a SCADA application from GE Vernova that contains real-time data.

The data is retrieved from the Habitat Sampler application.

This source processor is configured with one or more queries. Each query identifies one or more fields from Habitat records. Data will be fetched from one or more field elements, depending on the key specification of the query.

The query format is based on the format specified for the HABConnect Dynamic Data Exchange server (HABDDE), and should be familiar to anyone who has used HABDDE to integrate Habitat data into Excel spreadsheets or other applications with DDE support.

Documentation of the specific queries supported by this processor is provided here, but it is highly recommended to also consult the official Habitat documentation, particularly the HABDDE documentation, for more details. GE Vernova customers with a maintenance contract can access the documentation through the ServiceNow portal.

Connection and failover

The connection to Habitat is established by providing a username and the host and port of at least one Habitat Sampler endpoint:

receiveFromHabitat {
    id = "habitat-data"

    username = "CONNECT"
    endpoint("PROD-HAB-A", 8040)
    endpoint("PROD-HAB-B", 8040)
}

The processor will start by connecting to the first Sampler in the list and then fail over between endpoints until an instance with the primary role is found. Failover is also triggered at any point if the connection is lost or the Sampler instance takes on a non-primary role.

Queries

A Habitat query identifies the database where data is located and which records and fields to fetch data from. It can also be configured with additional options providing further instructions about the data and how to fetch it.

The database is identified by providing the database, application, and family names. To start defining a query you use the database builder and provide that information. In this example we identify the SCADAMOM database in the SCADA application in the EMS family.

receiveFromHabitat {
    id = "habitat-data"
    username = "CONNECT"
    endpoint("PROD-HAB-A", 8040)

    database {
        database = "SCADAMOM"
        application = "SCADA"
        family = "EMS"
    }
}

Then you define one or more queries for specific fields by using the query builder.

The query builder references one or more fields and takes the following properties:

  • id: A unique identifier for the query. The data produced by the query will be tagged with this identifier, enabling you to distinguish between different queries when the data is processed. Required.

  • fields: The comma separated qualified names of the fields to fetch data from. Has the form <field>_<record>, <field>_<record>.... E.g. DIS_ANALOG, ID_ANALOG, TPOFLAGS_ANALOG. Fields can also be specified with field options such as /DEADBAND=n%. E.g. DIS_ANALOG/DEADBAND=5%, ID_ANALOG. Required.

  • key: Expression selecting the record(s) to return field values for. Can be a key value, composite key, wildcard, range etc. The key is sent to the Sampler as-is. See the Habitat HABDDE documentation chapter "Specifying the Item" for more information on specifying the key. Required.

  • options: String of item options such as /RATE=n and /PERIODIC. Multiple options are simply concatenated. E.g. /RATE=10/PERIODIC/CIRCULAR. See the Habitat HABDDE documentation chapter "Using Optional Qualifiers" for more information on the available options. Most options are forwarded to the Sampler as-is, but this processor does not support permanent requests and will fail if the related options are configured: /PERMANENT=name and /NOPERMFILE. Optional.

  • processingHint(): Builder function that adds a processing hint for one or more fields. Hints do not affect the behavior of the Habitat Sampler, but provide additional information on how values should be processed on the client side. Multiple fields can be comma separated, for example processingHint(BIT_CONTAINER, "TPOFLAGS_ANALOG, TGFLAG"). You can also call the builder function multiple times with single fields if you prefer. Optional.

receiveFromHabitat {
    id = "habitat-data"
    username = "CONNECT"
    endpoint("PROD-HAB-A", 8040)

    database {
        database = "SCADAMOM"
        application = "SCADA"
        family = "EMS"

        query {
            id = "all-dis-analog"
            fields = "DIS_ANALOG"
            key = "*"
            options = "/RATE=1/PERIODIC"
        }

        query {
            id = "state-analog"
            fields = "DIS_ANALOG/DEADBAND=5%, ID_ANALOG, TPOFLAGS_ANALOG"
            key = "st0044.*.*.*.*"
            options = "/RATE=1"
            processingHint(BIT_CONTAINER, "TPOFLAGS_ANALOG")
        }
    }
}

For cases where it is impractical to define all queries in the configuration language, there is also an option to load the queries as CSV data using the loadQueries function:

receiveFromHabitat {
    id = "habitat-data"
    username = "CONNECT"
    endpoint("PROD-HAB-A", 8040)

    loadQueries(File("habitat-queries.csv").reader())
}

The CSV data contains one line per query.

The query fields are separated by semicolons and the columns are as follows:

<id>;<fields>;<database>;<application>;<family>;<key>;<options>;<processingHints>

Just like for the query builder, options and processingHints are optional. You still need to provide the semicolon separators for the optional fields, but the values can be empty.

processingHints has the format <hint>: <field>, <field>.... For example BIT_CONTAINER: TPOFLAGS_ANALOG, TGFLAG.

Example CSV line with all fields:

state-analog;DIS_ANALOG/DEADBAND=5%, ID_ANALOG, TPOFLAGS_ANALOG;SCADAMOM;SCADA;EMS;st0044.*.*.*.*;/RATE=1/PERIODIC;BIT_CONTAINER: TPOFLAGS_ANALOG

Example CSV line with only required fields:

all-dis-analog;DIS_ANALOG;SCADAMOM;SCADA;EMS;*;;
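
Since loadQueries reads the CSV data through a Reader, the queries do not have to come from a file. A minimal sketch, assuming loadQueries accepts any java.io.Reader (as the file-based example suggests), that supplies the two example queries from an in-memory string:

receiveFromHabitat {
    id = "habitat-data"
    username = "CONNECT"
    endpoint("PROD-HAB-A", 8040)

    // The same queries as the CSV examples above, supplied in-memory.
    val csv = """
        all-dis-analog;DIS_ANALOG;SCADAMOM;SCADA;EMS;*;;
        state-analog;DIS_ANALOG/DEADBAND=5%, ID_ANALOG, TPOFLAGS_ANALOG;SCADAMOM;SCADA;EMS;st0044.*.*.*.*;/RATE=1;BIT_CONTAINER: TPOFLAGS_ANALOG
    """.trimIndent()
    loadQueries(csv.reader())
}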

Data payload

When the processor starts it will fetch the current state of all the queried field elements. After that, by default, it will fetch any updated field elements at a fixed frequency. The detailed fetching behavior is configured by providing query options.

We call a data delivery containing the full current data set matching a query a "snapshot", and the subsequent updates "delta" deliveries.

A full snapshot might be delivered at any time, not only when the query is started. Any change that alters the structure of the record extent, such as a new record being inserted, will trigger a snapshot delivery. Queries can also be restarted due to events in the Connect cluster, such as rebalancing or restarts.

The data will be delivered in a Connect JSON compliant format with the following content.

A snapshot contains a list with an entry for each record matching the query. Each record is a list of field values in the order the fields were specified in the query. In the example below, we get a snapshot of the values for the fields DIS_ANALOG, ID_ANALOG, and TPOFLAGS_ANALOG for the records matching the configured key.

{
  "id": "state-analog",
  "scope": "snapshot",
  "fields": ["DIS_ANALOG", "ID_ANALOG", "TPOFLAGS_ANALOG"],
  "data": [
     [68.30430603027344, "MVAR", [true, false, false, false, false, false, false, false]],
     [0.28012818,        "MW",   [true, false, false, false, false, false, false, false]],
     [2.1668012,         "MVA",  [true, false, false, false, false, false, false, false]]
  ]
}

A delta contains the changed data, grouped under the field it belongs to. The record the data belongs to is identified by the record subscript, which refers to the record's position in the latest snapshot data. In the example below, we receive updates to the data in the first and third records of the snapshot above.

{
  "id": "state-analog",
  "scope": "delta",
  "data": {
    "DIS_ANALOG":      {"1": 68.30430603063364, "3": 2.1663567},
    "TPOFLAGS_ANALOG": {"3": [false, false, false, false, false, false, false, false]}
  }
}
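
The subscripts make it possible to patch a locally held snapshot as deltas arrive. Below is a minimal client-side sketch in Kotlin; the QueryState class and its members are hypothetical, and only the snapshot/delta structure is taken from the examples above.

// Hypothetical client-side state: the latest snapshot, kept as a mutable
// list of records, where each record is a list of field values in the
// order given by the snapshot's "fields" array.
class QueryState(
    val fields: List<String>,                  // e.g. ["DIS_ANALOG", "ID_ANALOG", "TPOFLAGS_ANALOG"]
    val records: MutableList<MutableList<Any?>>
) {
    // Apply a delta: for each changed field, look up the field's column,
    // then overwrite the value at each record subscript.
    fun applyDelta(delta: Map<String, Map<String, Any?>>) {
        for ((field, changes) in delta) {
            val column = fields.indexOf(field)
            require(column >= 0) { "Unknown field: $field" }
            for ((subscript, value) in changes) {
                // Subscripts are 1-based positions in the snapshot's record list.
                records[subscript.toInt() - 1][column] = value
            }
        }
    }
}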

Data types

The following Habitat field data types are supported. The data is represented as the smallest compatible JSON compliant JVM type.

  • Integer: Integer numbers of byte size 1, 2, and 4, represented as Byte, Short and Integer JVM types respectively.

  • Floating-Point: Real numbers of byte size 4 and 8, represented as Float and Double JVM types respectively.

  • Boolean: Boolean values, true and false, represented as the JVM Boolean type.

  • Character String: Fixed length character string. Represented as JVM String type with the padding characters trimmed off.

  • Time: Date and time of byte size 4 or 8, with second granularity. Represented as a JVM String in ISO-8601 compliant zoned date time format, in the UTC time zone. Example: 2024-07-19T09:12:57Z. Note that this source processor will enforce UTC time by using the (currently undocumented) TIME option: /TIME=UTC. This option will be added automatically to the queries. The query will be rejected if the user specifies the TIME option with another value.

  • Bit Container: A bit container of byte size 1, 2, or 4. Represented as a JVM List of Boolean types. This list is always right-padded to match the byte size of the bit container field, 8 bits per byte. So, the list will always be 8, 16 or 32 entries long, but only the first n entries are relevant, where n is the declared size of the bit container field. The remaining entries are always false.

  • UUID: RFC 4122 conforming UUID. Represented as a JVM String of length 36. Example: ae8f3cce-87d5-4857-9d59-6dd662b1a88e.

Note that data that is nullable in Habitat can also be represented as null in the JSON compliant data.
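
As a client-side illustration, the Time and Bit Container representations can be handled with a couple of small helpers. A minimal Kotlin sketch; the helper names are hypothetical, and the declared bit-container size must be known from your Habitat schema:

import java.time.ZonedDateTime

// Time fields arrive as ISO-8601 zoned date-time strings in UTC,
// e.g. "2024-07-19T09:12:57Z", and parse directly with java.time:
fun parseHabitatTime(value: String): ZonedDateTime = ZonedDateTime.parse(value)

// Bit containers arrive right-padded to 8, 16, or 32 booleans. Only the
// first n entries are relevant, where n is the declared size of the field:
fun trimBitContainer(bits: List<Boolean>, declaredSize: Int): List<Boolean> =
    bits.take(declaredSize)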

Flow design

This is a singleton source processor, meaning that there will only be one instance of it in the Connect cluster. If you need to ingest large amounts of Habitat data, you should distribute your queries across multiple processors, as sketched below. This ensures a good distribution of the data load across the Connect cluster.
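
For example, the two queries from the earlier example could be declared in two separate processors, each a singleton that the cluster can place independently. A sketch, reusing the configuration from above:

receiveFromHabitat {
    id = "habitat-dis-analog"
    username = "CONNECT"
    endpoint("PROD-HAB-A", 8040)

    database {
        database = "SCADAMOM"
        application = "SCADA"
        family = "EMS"

        query {
            id = "all-dis-analog"
            fields = "DIS_ANALOG"
            key = "*"
            options = "/RATE=1/PERIODIC"
        }
    }
}

receiveFromHabitat {
    id = "habitat-state-analog"
    username = "CONNECT"
    endpoint("PROD-HAB-A", 8040)

    database {
        database = "SCADAMOM"
        application = "SCADA"
        family = "EMS"

        query {
            id = "state-analog"
            fields = "DIS_ANALOG/DEADBAND=5%, ID_ANALOG, TPOFLAGS_ANALOG"
            key = "st0044.*.*.*.*"
            options = "/RATE=1"
            processingHint(BIT_CONTAINER, "TPOFLAGS_ANALOG")
        }
    }
}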

Notes

This is currently considered an experimental feature and is subject to breaking changes.

This processor has only been tested on Habitat 5.11SP2. It may work on other versions, but this is not guaranteed.

Sampler features currently not supported:

  • /PERMANENT processing.

  • Multi-dimensional fields.

  • SSL and mTLS.

Properties

  • endpoint(): Add a Habitat endpoint to connect to. The order of the endpoint definitions is significant; connections and failover will be performed in the order the endpoints are defined.

  • database(): Add one or more queries for a Habitat database.

  • loadQueries(): Load queries as CSV data. See the main receiveFromHabitat documentation for the expected format and other details.

  • username: The username to use for the Habitat connection.

  • connectionTimeoutMillis: The timeout of the HTTP client socket connection. Optional.

  • receiveTimeoutMillis: The socket timeout to wait for the first byte of the response from the server. Optional.

  • name: Optional, descriptive name for the processor.

  • id: Required identifier of the processor, unique across all processors within the flow. Must be between 3 and 30 characters long; contain only lower and uppercase alphabetical characters (a-z and A-Z), numbers, dashes ("-"), and underscores ("_"); and start with an alphabetical character. In other words, it adheres to the regex pattern [a-zA-Z][a-zA-Z0-9_-]{2,29}.

  • exchangeProperties: Optional set of custom properties in a simple JDK format, added to the message exchange properties before processing the incoming payload. Any existing properties with the same name will be replaced by properties defined here.

  • retainPayloadOnFailure: Whether the incoming payload is available for error processing on failure. Defaults to false.

Sub-builders

  • externalSystemDetails: Strategy for describing the external system integration. Optional.

  • messageLoggingStrategy: Strategy for describing how a processor's message is logged on the server.

  • payloadArchivingStrategy: Strategy for archiving payloads.

  • inboundTransformationStrategy: Strategy that customizes the conversion of an incoming payload by a processor (e.g., string to object). Should be used when the processor's default conversion logic cannot be used.