Construct User Guide

Construct is a constructive simulation toolkit that adds synthetic communications to training systems such as flight simulators, Serious Games, and command and control environments. Construct adds voice and radio capabilities to simulated entities and game avatars so that trainees can interact with them verbally. Entities are augmented with a simulated radio that transmits and receives on a specified communications net. Construct also enables face-to-face communication in 3D game environments between avatars and human players.

Construct's resources are configured using the Voisus web interface or programmatically with its HTTP API.

Construct Features

  • Simulated radios (DIS/HLA)
  • Text-To-Speech (TTS)
  • Automated Speech Recognition (ASR)
  • Sound file playback
  • Face-to-face communications in 3D environments
  • HTTP API for realtime control and status reporting
  • Web interface for remote viewing and configuration
  • Integrated with Voisus Scenarios and Comm Plans
  • DIS Entity attach
  • Entity behavior modeling tools
  • Natural language processing
  • Radio effects and background sound layering

Each application may use a different combination of these features. Applications needing only simple background radio chatter, for example, may use the Construct web interface to quickly script and then trigger radio transmissions on demand. Higher fidelity and more automated training systems may call for dozens of intelligent voice agents that listen, think, and respond to voice messages from human players using a combination of ASR, automated behaviors, and TTS.

Construct is used both standalone and as part of large, integrated training systems. When integrated into a larger training system, Construct's HTTP API and the DIS network are the two mechanisms for interoperation. For example, a simulation host computer can issue HTTP requests to Construct in order to trigger TTS transmissions from simulation entities, which are then heard by all human trainees with in-tune radios in the exercise.
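
As an illustration of this integration pattern, the sketch below builds (but does not send) an HTTP request asking an Entity to speak via TTS. The endpoint path and JSON fields are assumptions for illustration only; consult the Construct HTTP API documentation for the actual resource paths and schema.

```python
import json
import urllib.request

# Hypothetical server address and endpoint layout -- illustrative only.
SERVER = "http://voisus-server:8080"

def build_tts_request(entity_id: str, text: str) -> urllib.request.Request:
    """Build (but do not send) a request asking an Entity to speak via TTS."""
    payload = json.dumps({"text": text}).encode("utf-8")
    return urllib.request.Request(
        url=f"{SERVER}/api/construct/entities/{entity_id}/speak",  # assumed path
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_tts_request("tower-1", "Cleared for takeoff runway three five right")
# urllib.request.urlopen(req) would deliver the request to a live server.
```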

Speech Licenses

If your application requires text-to-speech (TTS) or automatic speech recognition (ASR), one or more speech licenses must be installed on the Voisus server. Visit the Licensing page to confirm that the expected number of speech streams are enabled.

Select the Speech tab and click "Choose File" to upload a new license file.

The number of TTS streams determines the number of speech events that can be processed simultaneously. For example, if three entities need to talk at the same time, then three TTS streams are needed.

Licenses are included in system backups via Backup/Restore.


Scenario Management

A Scenario contains all the information needed for a specific training task or simulation, such as Entities, Interactions, Behaviors, Sounds, and a Comm Plan. You can add these resources to a Scenario via the Voisus web interface or the HTTP API. This section describes the management of Scenarios in the Voisus web interface.

Access Construct's Scenario Management by clicking the Construct icon on the start page, then clicking the Scenarios button.

On the Scenario Management page, Scenarios are added, copied, deleted, renamed, and run as needed.

Without a Scenario running, the Voisus server is almost completely idle. Running a Scenario brings to life the sounds, speech, and communications stored within. Because some Scenarios are long-running, it is important to note that Scenarios can be edited on the fly and changes take effect immediately (unless otherwise noted). When the Voisus server is rebooted, the Scenario running at shutdown will be started again on boot.

Voisus servers may hold an unlimited number of Scenarios but can run only one Scenario at a time. These Scenarios can be created, viewed, modified, and run by all system users logged on to that particular server.

Create a New Scenario

  1. Select Empty Scenario or one of the Example Scenarios (such as Construct_Example) from the drop-down menu. Example Scenarios are pre-loaded with resources that can be easily reconfigured.
  2. Click + Add New Scenario.
  3. Change the Scenario's name by clicking on it.
  4. Run the Scenario.
  5. Open the Scenario to edit its resources, such as Entities, Interactions, Sounds, and Comm Plan. This configuration can be performed on Scenarios that are running as well as Scenarios that are stopped.

Comm Plan

The Comm Plan is a collection of virtual nets that represent communications channels or frequencies for radios or intercoms. Entities may have a radio with a configured net. The net determines which Entities and human players can communicate.

  • Nets define the frequency, modulation type, bandwidth, and crypto settings for the radios.
  • At a minimum, nets require a valid frequency and waveform to work properly.
  • Only Entities and Voisus Clients with matching net settings are able to communicate.
  • If using speech recognition, ensure the transmitting radio uses a high-quality waveform; otherwise recognition accuracy will be reduced. Use PCM encoding and a 16 kHz sample rate for best results. In some situations, this may require changing the radio settings on a remote system.
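
As a quick sanity check on the 16 kHz PCM guideline, a recording's sample rate and sample width can be inspected with Python's standard wave module. This sketch is illustrative; the file name is arbitrary.

```python
import wave

def suitable_for_asr(path: str) -> bool:
    """Check that a WAV file meets the 16 kHz / 16-bit PCM guideline."""
    with wave.open(path, "rb") as wav:
        return wav.getframerate() >= 16000 and wav.getsampwidth() >= 2

# Example: create a one-second 16 kHz mono 16-bit file, then check it.
with wave.open("check.wav", "wb") as wav:
    wav.setnchannels(1)
    wav.setsampwidth(2)        # 2 bytes per sample = 16-bit PCM
    wav.setframerate(16000)
    wav.writeframes(b"\x00\x00" * 16000)

print(suitable_for_asr("check.wav"))  # True
```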


Entities

Communications modeling in Construct centers on simulation entities that represent aircraft, ground vehicles, or other agents in the environment. Each entity is given a voice and a radio to enable communication with other entities and with human players. Hundreds of entities can be modeled by a single Construct instance to support large simulation events with a small footprint. Construct Entities may be created, modified, and deleted on demand using both the web interface and the HTTP API (HAPI).

The Voisus web interface is the most commonly used approach during initial setup and for simple applications. Using a web browser, an administrator or scenario designer can quickly create a handful of Entity definitions with customized voice and radio settings. These Entity definitions are stored in the Scenario; the corresponding runtime Entity instances are created when the Scenario is installed and destroyed when the Scenario is uninstalled. Configuration changes, such as changing an Entity's communication net, take effect immediately while the Scenario is running.

The HAPI, on the other hand, supports low-level, high-performance, and dynamic control of entities. Any client of the API is able to programmatically manage Entity definitions in the Scenario and interact directly with the runtime Entity instances. The HAPI is also the primary means to integrate Construct with external simulator systems. More information on the HTTP API is available here.
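
As a sketch of programmatic Entity management, the snippet below assembles an Entity definition as JSON ready to be posted to the Scenario. The field names are illustrative assumptions, not Construct's actual schema; see the HTTP API documentation for the real resource layout.

```python
import json

def make_entity_definition(name, domain, net, voice):
    """Build a hypothetical JSON Entity definition (field names illustrative)."""
    return {
        "name": name,
        "comms": {"domain": domain, "net": net},
        "tts": {"voice": voice, "rate": 5, "volume": 5},
    }

definition = make_entity_definition("AmericanOne23", 1, "Tower", "en-us-female-1")
body = json.dumps(definition)   # ready to POST to the Scenario's entities resource
```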

Construct entities follow standard radio protocol to avoid ‘stepping on’ radio transmissions from other entities or human players on the network. When an Entity plans to speak, it will wait for the communication net to be idle before transmitting.

Entity Attributes Overview

Although Entities have many attributes, many are optional, and only a few are needed for an Entity to talk on a radio. At a minimum, the Entity will need a Domain, Net, and TTS Voice set in order for it to communicate. The Domain and Net determine which DIS exercise and radio frequency the Entity will communicate on, respectively. Entity attributes are editable both on the Entity webpage and using the HAPI.

Monitoring Entities

For a realtime view of the running entities, visit the Construct status page.

Here you can monitor all entities running on the server, including entities created in the Voisus web interface and those created using the runtime Entity API. The status page also features a text field to type in and trigger speech from an Entity on the fly. Each time an Entity receives or transmits, the activity will be displayed on the webpage. This runtime status information is also accessible via the HAPI.

Entity Action

Entities are spurred into action in one of several ways:

  1. Interactions script a fixed sequence of radio calls between one or more Entities.

  2. Behaviors define Entity decision making and voice responses.

  3. The HAPI allows you to use a programming language of your choice to remotely trigger Entity behaviors and speech.

If you are unsure of which route to take, we recommend starting with Interactions and going from there.

Construct also works with Discovery Machine Behavior Modeling Console and other third party software using the HAPI.

Creating an Entity

  1. Open the Entities page.
  2. Click the + button to create a new Entity.
  3. Click the name of the new Entity to edit it.


  1. Name: Name of this Entity.

  2. Comms: The Comms section creates a simulated radio for this Entity.

    • Domain: set the DIS domain for this Entity. Communication with other Entities or other DIS-compatible systems is contingent on matching DIS exercise IDs (as well as other DIS settings).

    • Net: select a virtual net for this Entity. The drop-down displays all nets available in the Scenario's Comm Plan. If the list is empty, visit the Comm Plan page and create nets.

    • Radio Effects: Optional. Select an audio effect for this simulated radio, if desired. When applied, these effects degrade the audio quality for increased realism.

  3. Position: Optional. Enter a marking string if the Entity is to use propagation for radio ranging.

    • Marking: Enter a string here to associate this Entity with a DIS Entity on the network.
      • If using VBS2, enter the URN marking field of the object you wish to attach to.


  1. Text-to-Speech: Configure these settings if the Entity is going to use synthetic speech.

    • Voice: Select the voice this Entity will use. If the drop-down list is empty, upload TTS Voices.
    • Rate: Adjust the pace of the TTS voice. 1 is slowest; 9 is fastest.
    • Volume: Adjust the TTS volume level. 1 is quietest; 9 is loudest.
  2. Audio Controls: Raise or lower the pitch of the synthetic voice. This is a good way to expand the number of distinct voices available: the same TTS voice can be assigned to multiple Entities with different pitch shifts so that it sounds like different speakers.

    • Pitch Shift: Adjust the values by 0.1 at a time. The best range for adjustment is 0.8 to 1.2 and the effects may be limited by the voice chosen. The default value of 1.0 means no pitch shifting will take place.


  1. Behavior: Assign an intelligent Behavior to this Entity.
  2. Language Model: Assign a speech recognition language model to this Entity. "Listen" must be checked below to activate speech recognition. When a behavior or Artificial Intelligence Markup Language (AIML) definition is also assigned to the Entity, it can automatically respond to verbal commands.
  3. Language Parser: A language parser extracts the meaning from the speech once it has been recognized. Call signs, waypoints, and other variables can be extracted. Currently language parsers are created by ASTi and installed as plugins.
  4. AIML: An Artificial Intelligence Markup Language definition for the Entity provides a way for users to create automated, reactive behaviors and simple natural language processing solutions.
  5. Listen to Humans: Check this box if the Entity should actively listen and perform speech recognition on the audio it hears from human trainees.
  6. Listen to Entities: Check this box if the Entity should actively listen and perform speech recognition on the audio it hears from other Entities. Note: Both boxes may be checked at the same time so that the Entity listens and performs speech recognition on the audio it hears from human trainees and other Entities.
  7. Enable Radio: Enables radio communications for this Entity. A net and domain must also be set (under the General tab) for this radio to work.
  8. Enable Vocal Range: Enables face-to-face communications for this Entity. Useful when this Entity has an avatar in a 3D game environment.
  9. Attributes: Add JSON-formatted attributes to this Entity for use in behaviors and parameterized speech text. For example, add an "altitude" attribute set to value "5000" and then have the Entity say, "Currently at $altitude feet".
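
The "$attribute" substitution described above can be illustrated with Python's string.Template, which uses the same dollar-sign placeholder style. Construct's own substitution engine may differ in detail; this is a sketch.

```python
from string import Template

# Entity attributes as key-value pairs (as entered in the Attributes field).
attributes = {"altitude": "5000", "callsign": "American one twenty three"}

# Speech text with $-placeholders, filled in from the attributes.
speech = Template("$callsign currently at $altitude feet").substitute(attributes)
print(speech)  # American one twenty three currently at 5000 feet
```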


Interactions

Interactions prompt entities to take action and speak. An interaction can be a one-way broadcast, or it can describe a back-and-forth conversation between multiple entities. Interactions are useful for creating an ongoing backdrop of radio chatter on a radio frequency. Alternatively, interactions may be triggered at a specified start time or based on other conditions, such as a character's position in a 3D game environment. Each interaction lists a fixed sequence of speech events, separated by specified amounts of time.

Interaction Example

Let’s use a simple air traffic control situation as an example:

American 123: Austin tower, American one twenty three, runway three five right, ready for takeoff

Austin Tower: Cleared takeoff American one twenty three

To recreate this short exchange in Construct, create two entities and one interaction. The interaction should contain two actions, one for each radio transmission. Enter the text for each transmission and then check the Enabled box to start the Interaction immediately. By default the interaction will run once, but loop controls are available if the interaction should repeat after some interval. To restart the interaction once it has finished, toggle the Enabled option on the webpage.
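
The exchange above can be pictured as the following data layout. The field names here are illustrative, not the exact Interaction schema used by the web interface or HTTP API.

```python
# Illustrative structure for the two-action takeoff clearance exchange.
interaction = {
    "name": "Takeoff Clearance",
    "enabled": True,
    "loop": {"forever": False, "play_count": 1, "delay": 0.0},
    "actions": [
        {"entity": "American123", "type": "TTS", "delay": 0.0,
         "speech": "Austin tower, American one twenty three, "
                   "runway three five right, ready for takeoff"},
        {"entity": "AustinTower", "type": "TTS", "delay": 1.0,
         "speech": "Cleared takeoff American one twenty three"},
    ],
}
```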

Monitoring Interactions

As with entities, interactions are shown on the Construct status page along with any activity resulting from the running interactions.

Creating an Interaction

  1. Open the Interactions page.
  2. Click the + button to create a new Interaction.
  3. Click the Interaction's name to edit it.


  1. Name: Give the Interaction a descriptive name, such as 'IED Attack' or 'Takeoff Clearance'. The name is displayed on the Status page along with runtime information about the Interaction.

  2. Description: Optional. Add a description if desired.

  3. Enabled: Check this box to run an Interaction and prompt Entities to take action at the prescribed start time.

  4. Start Time

    • Type: Choose Relative, UTC, or VBS2 Trigger.
      • Relative: if selected, the time is an offset from the beginning of the Scenario. A relative time of zero would mean that the interaction should start running immediately after the Scenario begins.
      • UTC: if selected, the time is a Unix time in seconds since January 1, 1970. This mode is used when the Interaction should begin at a specific date and time, such as March 3, 2013, 3:00 PM. If UTC mode is used, be sure to synchronize the Voisus server system clock with any other simulation systems. Network Time Protocol (NTP) may be used to achieve precise synchronization.
      • VBS2 Trigger: Starts the interaction based on the result of a VBS2 scripting command.
    • Time: The offset, in seconds, that determines when the interaction will begin. The meaning of this value depends on the type of time selected.
  5. Loop

    • Loop Forever: If checked, the Interaction will continuously loop for the duration of the Scenario. If unchecked, a specific Play Count is specified.
    • Play Count: Determines the number of times an Interaction runs. Often this is left as 1, and the Interaction will run exactly once.
    • Delay: Set the delay, in seconds, between Interaction loops. For example, if the Delay is 5 and Play Count is 2, the Interaction will run twice with a five second delay in between.

Note: If the Scenario is currently running and the Interaction loop or start time settings are changed, it may be necessary to toggle the Interaction Enabled state (using the checkbox) to restart the Interaction with the updated settings.
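
If UTC mode is used, the Unix start time for a specific date can be computed with a short Python sketch; the example date is the one mentioned above.

```python
from datetime import datetime, timezone

# March 3, 2013, 3:00 PM UTC expressed as Unix seconds since January 1, 1970.
start = datetime(2013, 3, 3, 15, 0, 0, tzinfo=timezone.utc)
utc_seconds = int(start.timestamp())
print(utc_seconds)  # 1362322800
```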


In this tab you will build a sequence of actions to describe the Interaction. Each action specifies an Entity, an action type, and a small number of action parameters. Simple radio interactions often consist of two actions: a call and a response. Complex interactions might include dozens of actions among several Entities.

  1. Action Delay: The delay, in seconds, between each action. It is used to quickly adjust the timing of all actions in a particular Interaction. For example, an Action Delay of 0.5 inserts half a second of silence between each radio call in the Interaction.

  2. Click the + button to create a new Action.

  3. Each Action consists of the following:

    • Entity: Choose the Entity that will act. Each action can control a different Entity.

    • Delay: Delay time in seconds before this action is executed. This value is added to the Action Delay setting to determine the actual delay amount.

    • Type: Choose TTS or Sound.

      • TTS: the Entity will speak using text-to-speech, with the speech audio transmitting on its radio or into the 3D Earshot environment.
      • Sound: the Entity will speak using a sound file instead.
    • Speech: Displays the text for text-to-speech or the name of the selected Sound object created in a Scenario.

    • Background: An optional sound or Sound object to be played in the background of the radio transmission. For example, engine noise can be added to radio transmissions from a vehicle.

Beyond Interactions

Interactions are deliberately simple, and their fixed sequences limit what they can express. To recreate more sophisticated reactions and decision making, use Behaviors, AIML, or the HAPI.


Sounds

Sounds define segments of a sound file to be used by Construct, along with settings including gain, an optional transcript, and loop parameters. Sounds reference sound files containing either recorded speech or sound effects like gunfire or cockpit noise. Once a Sound is defined in a Scenario, it can be referenced by multiple Interactions and Behaviors to be used for Entity speech or sound effects.

If you have speech recordings you would like to replay on the network as radio chatter, create a Sound for each unique radio transmission. If you are using an Interaction to sequence the radio chatter, add an action for each radio transmission, selecting the corresponding Sound in the action dropdown menu. Running the interaction will now replay your recordings onto the DIS network. Sounds used in this way are an alternative to using TTS to synthesize speech. For Sounds containing speech, it is useful to fill in the Transcript text field so that the corresponding text is shown on the Construct Status page when the Entity speaks.

When a Sound is created and its sound file is selected, the entire sound file will be used by default. The Offset and Length parameters specify a subsection of the sound file to use instead. This is useful if there is too much silence at the beginning or end of the file. Editing Sound parameters takes effect immediately. Note that Sounds are distinct from the sound files (.wav) they reference. Sound files are uploaded and managed on the Sound Files page, which contains a library of default sound files. Sound files can be referenced by Sounds in multiple scenarios if desired.

Creating a Sound

  1. Open the Sounds page.
  2. Click the + button to create a new Sound.
  3. Click the Sound's name to edit it.
  4. Name: Type in a name for your sound that makes it identifiable.
  5. Gain: Gain applied to the sound during playback. "1" is the default and results in no volume change, while "0" silences the sound.
  6. Soundfile
    • File: Select the sound file to be used.
    • Offset: Location in the sound file to start playback. The default of "0" means start at the beginning of the file, while "16000" would start 16,000 samples into the file.
    • Length: Length in samples of the section to play. The default of "0" means play to the end of the file.
    • Transcript: For sounds containing speech to be transmitted from a radio, enter what is spoken in that file here. When Construct transmits the file, the speech transcript will display on the AMS pages.
  7. Loop
    • Loop Forever: If checked, the Sound will repeat indefinitely. If unchecked, the Play Count control determines the number of times the Sound plays.
    • Play Count: Enter the number of times the Sound will play (minimum 1).
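
Because Offset and Length are measured in samples, converting from seconds requires the sound file's sample rate. A small sketch, assuming 16 kHz audio:

```python
# Sample rate of the sound file; 16 kHz is assumed here for illustration.
SAMPLE_RATE = 16000

def seconds_to_samples(seconds: float) -> int:
    """Convert a duration in seconds to a sample count at SAMPLE_RATE."""
    return int(seconds * SAMPLE_RATE)

# Skip the first half second of the file and play the next two seconds:
offset = seconds_to_samples(0.5)   # 8000 samples
length = seconds_to_samples(2.0)   # 32000 samples
```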

Language Models

Language Models define the phraseology to be recognized and transcribed by the Construct Automatic Speech Recognition (ASR) system. Construct supports two types of speech recognition language models: Statistical Language Models (SLMs) and grammars. In Construct, each Entity has an optional Language Model selection, which should be filled in if that Entity should listen and respond to human speech. The type of speech the Entity is expected to encounter should determine the settings for its Language Model. For example, if the Entity should listen and respond to conversational English, a general English SLM should be used.

The grammar approach describes a strict syntax for the speech to be recognized using Backus Naur Form (BNF). Grammars are best suited for small domains with very constrained phraseology. Conversely, the more complex and variable the phraseology, the more likely it is that the statistical approach is the better choice.

With the SLM approach, the model is trained on thousands of transcriptions from the application domain and probabilistic methods help determine the most likely sequence of words for each new utterance. This approach is accepting of new word combinations that haven’t been seen before and hence is more suitable for large vocabulary tasks. SLMs are highly accurate when the training data is a good match to the real data.

ASTi has a number of models available if the Construct ASR package is enabled. Contact ASTi to determine whether a model is already available or can be created for your domain.

Consult the Construct API documentation for information on accessing speech recognition events from other systems on the network.

Enabling Speech Recognition

  1. Open the Language Model page and add a Language Model using the + button.

  2. Select a grammar or SLM model from the Speech Model drop-down menu.

    You may wish to start with package-16K-EN-120210, a general English SLM that recognizes conversational English spoken with an American accent.

  3. Select the Shared option if the Language Model should be shared by multiple entities.

    This is recommended for SLMs because they consume a significant amount of system RAM. Depending on the amount of RAM available, multiple entities using non-shared Language Models could starve the system and cause audio breakup or other issues.

  4. Open the Entities page and select or create the Entity that will use speech recognition.

  5. Select the Advanced tab.

  6. Select the Language Model from the drop-down menu.

  7. Ensure that the Listen checkbox is checked.

The Entity will automatically transcribe any received audio, producing a recognition event. The recognition text is shown on the Construct Status page and Construct Events monitoring page.

  • Note that, unless Listen to Entities is checked, entities do not listen to each other speak; by default they only perform speech recognition on audio transmitted by humans.
  • Whether an Entity responds is determined by the Entity’s Behavior, if one is selected, or by the simulation host computer if it is listening via the HTTP API.

Grammar Syntax

A grammar text area is displayed when an acoustic model is selected, allowing you to define the exact phrases to recognize. Edit the grammar in place on the webpage, or copy and paste the grammar contents from a file on your computer. A simple grammar that recognizes one or more digits is as follows:

    <Digit> = (zero | one | two | three | four | five | six | seven | eight | nine);
    public <TopLevel> = <Digit>+;

    Grammar Syntax                Meaning
    <Rule> = expression;          Define a new grammar rule or nonterminal
    hi | hello                    Choose between multiple options
    ()                            Parentheses create groups of items
    +                             Match the preceding expression one or more times
    *                             Match the preceding expression zero or more times
    []                            Brackets make the expression within optional
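
Using the operators above, a slightly larger grammar might accept radio-check phrases; the rule names and vocabulary here are illustrative:

    <Digit> = (zero | one | two | three | four | five | six | seven | eight | nine);
    <Callsign> = (alpha | bravo | charlie) <Digit>+;
    public <TopLevel> = [this is] <Callsign> radio check;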

Radio Effects

Radio Effects can be added to Entity radio transmissions to make them sound more realistic, with added distortion, noise, and other filtering effects. Any number of Radio Effects can be added to a Scenario, but each Entity may only use one at a time. A wide range of sounds is possible, from “clean”, to “gritty” or “noisy”, all by adjusting the handful of settings in the Radio Effect definition. A single Radio Effect may be reused by multiple Entities if desired. An Entity without a Radio Effect selected automatically gets the system default effects.

The following settings can be adjusted for each Radio Effect:

  • Bandlimiting
    • Highpass Freq: Highpass filter frequency in Hertz (Hz). Highpass filters block low frequencies and allow high frequency audio content to pass through.
    • Lowpass Freq: Lowpass filter frequency in Hertz (Hz). A highpass frequency of 1000 and a lowpass frequency of 3000 combine to create a “narrow” sounding voice transmission.
  • Noise
    • Color: White, Pink, or Brown. Determines the character of the noise added on top of the voice signal. White has the most high frequency content, pink is balanced, and brown has the most low frequency content.
    • Gain: Controls the volume of the added noise signal. A value of 0.0 would mean no additional noise is added, while values larger than 0, such as 0.1, would add noticeable noise.
  • Distortion
    • Mode: Off, Overdrive, or Distortion. Controls the character of the distortion effect applied to radio transmissions. Overdrive is useful for making the transmission sound more compressed, while Distortion adds more grit.
    • Input Gain: Higher values like 2.0, 4.0, and so on will introduce more distortion.
    • Mix Amount: Controls the level of distorted audio mixed in with the original audio, with 0.0 being only the original, 1.0 all distorted audio, and 0.5 being half and half.
  • Transmit Volume
    • Limiter Threshold: Threshold in decibels (dB) that constrains the loudness of the transmitted audio signal. Smaller (more negative) values such as -12, -24, -36, and so on will compress the signal into a smaller range. By applying a Makeup Gain next, the overall volume can be boosted back to normal but the audio will have a compressed, radio-like sound.
    • Makeup Gain: Final gain applied to the transmitted audio signal, used primarily to compensate for low Limiter Thresholds to create a compressed radio sound.
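
The Mix Amount behaves as a standard wet/dry blend, which can be sketched per audio sample as:

```python
def mix(dry: float, wet: float, mix_amount: float) -> float:
    """Blend the original ('dry') and distorted ('wet') signals.

    mix_amount 0.0 keeps only the original, 1.0 only the distorted audio.
    """
    return (1.0 - mix_amount) * dry + mix_amount * wet

print(mix(0.8, 0.2, 0.0))  # 0.8 (original only)
print(mix(0.8, 0.2, 1.0))  # 0.2 (distorted only)
print(mix(0.8, 0.2, 0.5))  # 0.5 (half and half)
```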

Construct Settings

The Settings webpage allows editing miscellaneous Construct parameters. Please note that these settings affect all Scenarios, not just the one currently being edited.

  • Discovery Machine Port: Network TCP port of a remote Discovery Machine behavior modeling instance. This supports external behavior modeling for Construct Entities. A value of 0 disables this feature.
  • VBS2 Server: Hostname or IP address of an associated VBS2 server. Construct behaviors and interactions will use this VBS2 server if it is specified. This feature is disabled if the field is left empty. ASTi's JRPC-VBS2 plugin must be installed for this to be functional.
  • VBS2 Port: Network TCP port opened by the ASTi JRPC-VBS2 plugin.
  • TTS Substitutions: Optional text substitutions to be performed on Entity transcripts. This can be useful to correct for mispronunciations and transform abbreviations into the corresponding spoken form. These substitutions will affect all Entities and all Scenarios. The Text field accepts Python-formatted regular expressions.
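
A sketch of the substitution step, using Python's re module; the patterns below are illustrative examples of expanding abbreviations into their spoken form.

```python
import re

# Each entry pairs a Python-formatted regular expression with its replacement.
substitutions = [
    (r"\bft\b", "feet"),
    (r"\bIED\b", "I E D"),
]

def apply_substitutions(text: str) -> str:
    """Apply each substitution in order to the TTS text."""
    for pattern, replacement in substitutions:
        text = re.sub(pattern, replacement, text)
    return text

print(apply_substitutions("Climb to 5000 ft"))  # Climb to 5000 feet
```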

Artificial Intelligence Markup Language (AIML)

AIML describes voice responses and simple natural language understanding for Construct Entities. The XML-based AIML syntax features pattern matching, response templates, and the ability to store and retrieve variables. AIML input patterns are matched against voice messages received by the Entity in order to activate certain response templates. Response templates are able to modify Entity state variables and generate an automated voice response.

To create an AIML definition for an Entity, visit the AIML webpage, available under the Configure dropdown. Once an AIML definition is created, it is reusable in any number of Scenarios on the Voisus server. On the AIML webpage, click the plus button to create a new AIML definition, then edit it on the right by giving it a name and filling in the XML Definition section. The AIML definition is saved whenever the XML Definition is updated and the user clicks outside the text area. For large AIML definitions, it may be easiest to edit the file on your PC and then copy and paste it onto the webpage. Once you have created an AIML definition, add it to a Construct Entity by selecting it on the Entity's Advanced tab. The Entity will then use it when processing all subsequently received voice messages, whether from speech recognition or otherwise.

AIML is a useful tool for creating simple reactive Entities, but some applications may require more custom logic or more sophisticated natural language understanding. When an application outgrows AIML, other behavior and natural language processing tools can be brought to bear, whether provided by ASTi or by another source and integrated via the Construct HTTP API.

AIML Complete Example

The AIML definition below matches phrases in the following formats:

  • "Contact wolverine on blue four"
  • "Radio check two one eight point five"
  • "Radio check"
  • Unknown phrases result in a "say again" response
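
A minimal AIML definition covering these formats might look like the following sketch; the response wording is illustrative:

    <?xml version="1.0" encoding="UTF-8"?>
    <aiml>
      <category>
        <pattern>CONTACT * ON *</pattern>
        <template>Roger contacting <star index="1"/> on <star index="2"/></template>
      </category>
      <category>
        <pattern>RADIO CHECK *</pattern>
        <template>Reading you loud and clear on <star index="1"/></template>
      </category>
      <category>
        <pattern>RADIO CHECK</pattern>
        <template>Reading you loud and clear</template>
      </category>
      <category>
        <pattern>*</pattern>
        <template>Say again</template>
      </category>
    </aiml>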


    Tag                            Definition
    <?xml>                         Must be present as the first tag in an AIML definition
    <aiml>                         AIML block delimiter; only one supported per file
    <category>                     Knowledge unit containing one pattern and one response template
    <pattern>PATTERN</pattern>     Input pattern used to match received speech
    <template>                     Template describing the response for an input pattern
    <star index="N"/>              Binds to the value of * for use in response templates
    <srai>PATTERN</srai>           Symbolic reduction operator for calling other categories
    <set name="VAR">VALUE</set>    Sets a variable to the specified value
    <get name="VAR"/>              Retrieves the value of a variable
    <think>                        Hides the output of the computations within from the response

AIML Construct Integration

Construct adds functionality beyond the core AIML standard in order to more tightly integrate AIML with the state of the Entity and its natural language understanding. In particular, special meaning is given to AIML variables depending on capitalization.

  • Lowercase variables like name are synced between the Entity blackboard and the AIML state. This enables setting Entity attributes on the webpage or in behaviors then using those variables in speech generated from AIML. Similarly, when AIML sets one of these variables, the variable is set in the Entity blackboard.
  • Uppercase variables like Command set in AIML are used as key-value pairs in the speech recognition result meaning value. The meaning is then included in the speech recognition event sent to HTTP API clients for use in AI decision making.
  • Variables beginning with an underscore like _state are considered private to AIML.
  • The AIML response, if there is one, is immediately spoken by the Entity.

AIML Responses and Meaning Values

Received voice message:

contact wolverine on blue four

Matching AIML category:

    <category>
      <pattern>CONTACT * ON *</pattern>
      <template>
        Roger contacting <star index="1"/> on <star index="2"/>
        <think>
          <set name="Command">contact</set>
          <set name="Callsign"><star index="1"/></set>
          <set name="Frequency"><star index="2"/></set>
        </think>
      </template>
    </category>

Resulting meaning value:

{ "Command": "contact", "Callsign": "wolverine", "Frequency": "blue four" }

Entity voice response:

Roger contacting wolverine on blue four


Behaviors

Behaviors enable Construct Entities to listen, speak, and otherwise act autonomously in the simulation environment. These Behaviors consist of a hierarchical, tree-like structure of nodes with the purpose of breaking complex high-level tasks down into smaller subtasks and eventually into individual actions for the Entity to execute. Although Behaviors can be built to automate many types of tasks, with Construct the focus is most commonly on reproducing the speech patterns and radio communication protocols of real world agents.

A few examples of actions available in Construct Behaviors include:

  • Listen for a radio transmission containing certain keywords (like a callsign)
  • Speak and transmit on a radio or face-to-face in a 3D environment
  • Execute a scripting command in attached game environments like Virtual Battlespace
  • Wait on a condition or event, then speak

Depending on the application, Behaviors are built to run for the duration of the Scenario or they can complete execution and exit after certain tasks are finished.

Behavior Workflow

  1. Navigate to the Behaviors webpage to create, modify, and delete Behaviors
  2. Create a new Behavior by clicking the + button
  3. Add a root node to the Behavior by clicking one of the node type dropdown buttons and then selecting a specific node type
  4. Continue adding more nodes to the Behavior by selecting the radio button on an existing node then using the node type menus to add child nodes
  5. Show/hide configuration settings for each node by clicking the toggle button with an eye icon
  6. Add the Behavior to an Entity by navigating to the Entities webpage then selecting the Behavior in the dropdown on the Advanced tab
  7. The Behavior starts executing immediately if the Scenario is running
  8. Changes made to Behaviors do not take effect in realtime. The Scenario must be reinstalled, or the Behavior must be deselected and then re-selected in the Entity dropdown

Understanding Behavior Execution

Once a Behavior is added to a running Construct Entity, it executes several times a second until it completes or the Scenario stops. Behaviors are restarted by deselecting and then re-selecting the Behavior in the Entity dropdown, or by restarting the Scenario.

The execution of behaviors is measured in ticks, which occur several times a second. With each tick, the root node executes one step of logic, which in turn may tick some or all of the descendant nodes, depending on the structure of the tree and the individual node types. Each type of node has its own strategy or style of execution that determines how long it executes and what processing takes place during each tick. For example, a Repeat Always node always executes its child node once per tick and never completes execution itself. On the other hand, an extremely simple Behavior might consist only of a single action node that completes execution in a single tick.

When a node finishes execution it returns success or failure. This status value may then be used by the parent node when deciding how to carry on. In some cases, a failure may mean the entire Behavior should fail and stop executing immediately. In other cases, the Behavior should try again and succeed once the subtask succeeds, no matter how many attempts it takes.
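
The tick-and-status mechanics described above can be pictured with a short sketch. This is a simplified model for illustration only, not Construct's actual implementation; the node classes and status strings are assumptions.

```python
class Action:
    """Leaf node that completes in a single tick."""
    def __init__(self, fn):
        self.fn = fn
    def tick(self):
        return "success" if self.fn() else "failure"

class Sequence:
    """Ticks children in order; fails fast, succeeds only if all succeed."""
    def __init__(self, *children):
        self.children = list(children)
        self.index = 0
    def tick(self):
        while self.index < len(self.children):
            status = self.children[self.index].tick()
            if status != "success":
                return status        # "running" or "failure" propagates up
            self.index += 1          # child succeeded; advance next time
        return "success"

class Selector:
    """Ticks children in order; succeeds fast, fails only if all fail."""
    def __init__(self, *children):
        self.children = list(children)
        self.index = 0
    def tick(self):
        while self.index < len(self.children):
            status = self.children[self.index].tick()
            if status != "failure":
                return status
            self.index += 1
        return "failure"

root = Selector(
    Sequence(Action(lambda: False)),  # subtask fails...
    Action(lambda: True),             # ...so the Selector tries this child
)
status = "running"
while status == "running":            # a real runtime paces these ticks
    status = root.tick()
print(status)  # success
```

Note how the Selector recovers from the failed subtask by moving on to its next child, which is exactly the "try again until the subtask succeeds" pattern described above.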

The Behavior Viewer webpage shows behavior execution and supports inserting breakpoints on nodes to suspend behavior execution for analysis. This tool becomes especially useful when building and debugging large behaviors. This webpage is accessible from a link on the Behavior builder webpage.

Behavior Node Types

Behavior node types and their parameters are described below. Many Behaviors in Construct focus on listening for keywords using speech recognition, then generating a corresponding voice response.

Listen and Say are two types of action nodes, which cause the Entity to actually take some specific action in the environment.

Composite Nodes

Node Type Function
Selector Runs its children in sequence until one succeeds. Returns success as soon as a single child succeeds.
Sequence Runs its children in sequence until one fails. Returns success only if all children succeed.
Parallel Runs its children in parallel, with the return value determined by the Policy setting. If Policy is require one then success is returned when any child succeeds. If require all is set, then success is returned only when all children have succeeded. Parallel execution of children here means that for each tick of the Parallel node, each of its child nodes run one tick.

Repeat Nodes

Node Type Function
Repeat-Always Repeatedly runs its child to completion regardless of the child's success or failure. This node type never returns.
Repeat-Until-Succeed Similar to Repeat-Always, but returns success when its child succeeds.
Repeat-Until-Fail Similar to the above, but returns success when its child fails.

Action Nodes

Node Type Function
Say Speaks text using TTS, or plays a Sound if one is selected. Supports text substitution using variables whose names are prefixed with "$", for example "Roger, this is $callsign". Returns success if the speech completes, or failure if a speech variable could not be resolved. When the speech finishes, a variable _lastSaid is set on the Entity blackboard containing the text that was spoken, or the Sound ID if a Sound was played.
Listen Blocks until a voice message is received, at which time it succeeds or fails based on whether the specified Keywords and Require conditions are matched. Keywords is a comma-separated list of keywords that must be present in the message text. Require is a Behavior Expression (explained below) that is evaluated to True or False. When the node is about to succeed, three variables are temporarily set on the Entity blackboard: rec_text, rec_meaning, and rec_conf. The Require expression is then evaluated, which may read these values and save them elsewhere, after which the three variables are cleared from the blackboard.
Assert Succeeds or fails based on the evaluation of the specified Expression.
Assert-Position Succeeds if the Entity is currently within Distance meters of the point specified by the X, Y, and Z coordinates.
VBS Command Executes a scripting command on an attached VBS instance, if one exists. Returns success if the command executes, or failure if VBS is not connected.
Event Waits until a specific named event occurs, at which time success is returned. Fails if a different event is raised first.
Wait Waits Time seconds and then succeeds.
Wait-for-Silence Waits for radio silence and succeeds when silence is detected or fails if Timeout seconds elapses first.
Expression Evaluates the Expression against the Entity blackboard.
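
The "$"-prefixed substitution used by the Say node can be sketched as follows. This is an illustrative model only; Construct's actual resolution rules are internal to the product.

```python
import re

def substitute(text, blackboard):
    """Replace $name tokens with blackboard values.

    An unresolved variable raises KeyError, modeling the Say node's
    failure case when a speech variable cannot be resolved.
    """
    def repl(match):
        name = match.group(1)
        if name not in blackboard:
            raise KeyError(name)  # unresolved variable -> node fails
        return str(blackboard[name])
    return re.sub(r"\$(\w+)", repl, text)

print(substitute("Roger, this is $callsign", {"callsign": "Striker 7"}))
# Roger, this is Striker 7
```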

Utility Nodes

Utility nodes support exactly one child node and modify the child's execution or return value in some way.

Node Type Function
Timeout Executes the child until it returns, or until Time has elapsed. The FinalValue parameter determines the success or failure of this node upon timeout.
Limit Limits the execution rate (tick rate) of its child to a specified interval in seconds.
Flip Runs its child until completion, but returns an inverted success or failure result.
Exc-Handler Catches and suppresses exceptions raised when executing its child. Returns the success or failure value from the child, unless an exception is raised, in which case it always fails.

Behavior Expressions

Several node types evaluate expressions during execution as a means to modify the Entity and influence Behavior execution. Expressions are entered by the Behavior developer in a simple syntax that should look familiar to programmers who have used imperative languages like C or Python.

All expressions are evaluated against the Entity's blackboard, which is a simple storage area for each Entity's data. For example, the expression count = 1 sets a variable count to the value 1 in the current Entity. Blackboard variables can then be used as variables in speech or be referenced for other purposes by other nodes in the Behavior.

In addition to reading and writing arbitrary variables on the Entity blackboard, expressions can access the Entity's webpage attributes through an implicit blackboard variable named entity. For example, to change the runtime Entity's name, use the expression entity["name"] = "John Oliver".


Behavior Expressions:

  • May contain multiple statements separated by semicolons
  • Support creating and manipulating data in the JSON format
  • Are executed by a simple interpreter that does not support complex statements
  • Retrieve and set variables on the Entity blackboard

Example expressions that demonstrate the available operators and functions:

Expression Note
count = 1 Number assignment
count += 2 Number increment
count -= 2 Number decrement
count == 3.3 Number equality
count != 3.3 Number inequality
x = count * 2 Number multiplication
name = "Striker 7" String assignment
name == "Striker 6" String equality
name != "Striker 6" String inequality
"three two" in text String contains
times = [1,2,3] List assignment
times.append(5) List append
times.remove(5) List remove
time = times[1] List access
flight = { "number": 42 } Object assignment
flight["number"] = 43 Object item assignment
flight["number"] Object item access
x || y Logical OR
x && y Logical AND
tmp = count; count = 0 Chaining multiple statements
a=1; b=2; c=a+b; d=c*55; Chaining multiple statements; accessing variables
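
Because the operators in the table above happen to map closely onto Python, the semantics can be sketched with a toy evaluator. This is illustrative only; Construct's actual interpreter is internal and more restrictive, and the naive ';' split and '||'/'&&' rewriting below would mishandle those characters inside string literals.

```python
def run_expression(expression, blackboard):
    """Execute a Behavior-Expression-like string against a blackboard dict.

    Sketch only: rewrites '||'/'&&' to Python's 'or'/'and', then executes
    each ';'-separated statement with the blackboard as its namespace.
    """
    source = expression.replace("||", " or ").replace("&&", " and ")
    for statement in source.split(";"):
        statement = statement.strip()
        if statement:
            # No builtins: only blackboard names are visible, loosely
            # mirroring the "simple interpreter" described above
            exec(statement, {"__builtins__": {}}, blackboard)
    return blackboard

bb = run_expression('count = 1; count += 2; name = "Striker 7"', {})
print(bb["count"], bb["name"])  # 3 Striker 7
```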