This is Jitsu.Classic documentation. For the latest version, please visit the new Jitsu documentation. Read about the differences here.


Schema and Mapping

This page documents a deprecated feature. It may be removed in the future.

Please use JavaScript Transformation instead.

Jitsu is "schema-less" by design: an event with any JSON payload can be processed. At the same time, explicit SQL types can be configured in the mapping section. Explicit SQL types override the types inferred from JSON values; they are used when creating or patching data warehouse tables and in cast functions on insert operations. See more about type recognition and typecasting in the Typecast section.


A special section of the destination configuration defines how JSON is transformed before it is sent to the target. Currently, four mapping actions are supported: move, remove, cast, and constant:

        keep_unmapped: true # whether fields that are not explicitly mapped should be kept or removed
        fields:
          - src: /src/field/path # source JSON path
            dst: /dst/field/path # could be just_field_name, without a leading '/'. Before inserting, all '/'
            # (except the first one) will be replaced with '_'
            action: move | remove | cast | constant
            type: varchar(256) # for 'move' (optional) and 'cast' (required) actions - SQL type (depends on destination)
            column_type: varchar(256) encode zstd # optional. SQL column DDL used on create/patch tables. Default value is this object's 'type' value.
            value: # value to set as a constant at 'dst'. Required only for the 'constant' action; other actions ignore this field.
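The comment on dst above describes how a nested destination path becomes a flat column name. A minimal sketch of that rule (the helper name is this sketch's own, not part of Jitsu's API): drop the leading '/' and replace every remaining '/' with '_'.

```python
def dst_to_column(dst: str) -> str:
    """Flatten a slash-separated dst path into a destination column name:
    the leading '/' is dropped and every remaining '/' becomes '_'."""
    return dst.lstrip("/").replace("/", "_")

print(dst_to_column("/dst/field/path"))  # dst_field_path
print(dst_to_column("just_field_name"))  # just_field_name
```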
Parameters

keep_unmapped
If true, all fields that are not explicitly mapped will be added to the result JSON as-is. If false, only fields mentioned in mapping rules with the move action will be added to the result JSON. Default value: true.

fields
An array of mapping objects.

fields[N].src
Slash-separated source JSON node path.

fields[N].dst
Slash-separated destination JSON node path, or a plain field name.

fields[N].action
Currently move, remove, cast, and constant are supported.

move - get the value at the src JSON path and put it at the dst JSON path.

remove - remove the value at the src JSON path.

cast - take the SQL type from the type field and apply it to the dst JSON path node in SQL statements (table create/update statements and inserts using the src field).

constant - put the value of the value field at the dst JSON path node.

fields[N].type
Data-warehouse-specific SQL type that will be applied to the dst JSON path field. Can be used with the move, cast, and constant actions.

fields[N].column_type
Data-warehouse-specific SQL column DDL used in the column creation statement. It is used together with fields[N].type. For instance, a Redshift column encoding can be provided here: column_type: varchar(256) encode zstd. Optional. Default value is the fields[N].type value. Can be used with the move and cast actions.

fields[N].value
A constant value that will be set at the dst JSON path in the result object. Can be used only with the constant action.


        keep_unmapped: false # only fields from the configured rules will be in the result
        fields:
          - src: /employee/id
            dst: /id
            action: move
          - src: /employee/start_at
            dst: /working_period_start
            action: move
            type: timestamp # SQL type
          - src: /employee/salary
            action: remove
          - dst: /postal_code
            action: constant
            type: text # SQL type
            value: 90210
          - dst: /taxes_sum
            action: cast
            type: numeric(38,18) # SQL type

Input JSON object:

      "id": 19318412,
      "start_at": "2018-12-10 10:06:18.509996",
      "salary": 50000.00,
      "personal_data": { "address": "...", "phone_number": "..." },
  "taxes_sum": "892.32",

Result object with SQL types:

  "id": 19318412,
  "working_period_start": "2018-12-10 10:06:18.509996" #::timestamp,
  "postal_code": 90210 #::text,
  "taxes_sum": "892.32" #::numeric(38,18)

For configuring a Segment-like schema, please see the Segment Compatibility section.