The ZTPServer uses a series of YAML files for its various configuration files and databases. The YAML format makes the files easy to read, and makes adding or updating entries more intuitive than other file formats such as JSON, or binary formats such as SQLite databases.

The ZTPServer components are housed in a single directory defined by the data_root variable in the global configuration file. The directory location will vary depending on the configuration in /etc/ztpserver/ztpserver.conf.

The following directory structure is normally used:


All configuration files can be validated using:

(bash)# ztps --validate

Global configuration file

The global ZTPServer configuration file can be found at /etc/ztpserver/ztpserver.conf. It uses the INI format (for details, see top section of Python configparser).

An alternative location for the global configuration file may be specified by using the --conf command line option:


(bash)# ztps --help
usage: ztpserver [options]

optional arguments:
  -h, --help            show this help message and exit
  --version, -v         Displays the version information
  --conf CONF, -c CONF  Specifies the configuration file to use
  --validate-config, -V
                        Validates config files
  --debug               Enables debug output to the STDOUT
  --clear-resources, -r
                        Clears all resource files
(bash)# ztps --conf /var/ztps.conf

If the global configuration file is updated, the server must be restarted in order to pick up the new configuration.


[default]
# Location of all ztps bootstrap process data files
# default=/usr/share/ztpserver
data_root=<path>

# UID used in the /nodes structure
# default=serialnum
identifier=<serialnum | systemmac>

# Server URL to be advertised to clients (via POST replies) during the bootstrap process
# default=http://ztpserver:8080
server_url=<URL>

# Enable local logging
# default=True
logging=<True | False>

# Enable console logging
# default=True
console_logging=<True | False>

# Console logging format
# default=%(asctime)-15s:%(levelname)s:[%(module)s:%(lineno)d] %(message)s
console_logging_format=<(Python)logging format>

# Globally disable topology validation in the bootstrap process
# default=False
disable_topology_validation=<True | False>

[server]
# Note: this section only applies to using the standalone server.  If
# running under a WSGI server, these values are ignored

# Interface to which the server will bind (0.0.0.0 will bind to
# all available IPv4 addresses on the local machine)
# default=
interface=<IP addr>

# TCP listening port
# default=8080
port=<TCP port>

[bootstrap]
# Bootstrap filename (file located in <data_root>/bootstrap)
# default=bootstrap
filename=<filename>

[neighbordb]
# Neighbordb filename (file located in <data_root>)
# default=neighbordb
filename=<filename>


Configuration values may be overridden by setting environment variables, if the configuration attribute supports it. This is mainly used for testing and should not be used in production deployments.

Configuration values that support environment overrides use the environ keyword, as shown below:

    runtime.add_attribute(StrAttr(
        name='data_root',
        default='/usr/share/ztpserver',
        environ='ZTPS_DEFAULT_DATAROOT'))


In the above example, the data_root value is normally configured in the [default] section as data_root; however, if the environment variable ZTPS_DEFAULT_DATAROOT is defined, it will take precedence.

Bootstrap configuration

[data_root]/bootstrap/ contains files that control the bootstrap process of a node.

  • bootstrap is the base bootstrap script which is going to be served to all clients in order to control the bootstrap process. Before serving the script to the clients, the server replaces any references to $SERVER with the value of server_url in the global configuration file.

  • bootstrap.conf is a configuration file which defines the local logging configuration on the nodes (during the bootstrap process). The file is loaded on demand.


    logging:
      - destination: ""
        level: DEBUG
      - destination: file:/tmp/ztps-log
        level: DEBUG
      - destination: ztps-server:1234
        level: CRITICAL
      - destination:
        level: CRITICAL

    xmpp:
      username: bootstrap
      password: eosplus
      rooms:
        - ztps
        - ztps-room2


In order for XMPP logging to work, a non-EOS user needs to be connected to the room specified in bootstrap.conf before the ZTP process starts. The room has to be created (by the non-EOS user) before the bootstrap client starts logging the ZTP process via XMPP.

Static provisioning - overview

A node can be statically configured on the server as follows:

  • create a new directory under [data_root]/nodes, using the system’s unique_id as the name
  • create/symlink a startup-config or definition file in the newly-created folder
  • if topology validation is enabled, also create/symlink a pattern file
  • optionally, create a config-handler script which is run whenever a PUT startup-config request succeeds
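
For example, assuming a node whose unique_id is 001c73aabbcc (a hypothetical value), the resulting folder might contain:

    [data_root]/nodes/001c73aabbcc/
        startup-config      # static configuration (or a definition file instead)
        pattern             # required only if topology validation is enabled
        config-handler      # optional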

Static provisioning - startup_config

startup-config provides a static startup-configuration for the node. If this file is present in a node’s folder, when the node sends a GET request to /nodes/<unique_id>, the server will respond with a static definition that includes:

  • a replace_config action which will install the configuration file on the switch (see actions section below for more on this). This action will be placed first in the definition.
  • all the actions from the local definition file (see definition section below for more on this) which have the always_execute attribute set to True
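
Sketched in the definition format, the response assembled by the server might look like the following (the startup-config URL shown here is an assumption for illustration):

    name: <node name>
    actions:
        - action: replace_config
          attributes:
              url: /nodes/<unique_id>/startup-config
        - action: <any action from the local definition with always_execute: True>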

Static provisioning - definition

The definition file contains the set of actions which are going to be performed during the bootstrap process for a node. The definition file can be either:

  • manually created, OR
  • auto-generated by the server, when the node matches one of the patterns in neighbordb (in this case, the definition file is generated based on the definition file associated with the matching pattern in neighbordb)

name: <system name>

actions:
    - action: <action name>

      attributes:                   # attributes at action scope
          always_execute: True      # optional, default False
          <key>: <value>
          <key>: <value>

      onstart:   <msg>              # message to log before action is executed
      onsuccess: <msg>              # message to log if action execution succeeds
      onfailure: <msg>              # message to log if action execution fails

attributes:                         # attributes at global scope
    <key>: <value>
    <key>: <value>
    <key>: <value>
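
For illustration, a minimal definition following this schema (all names and values below are hypothetical) might be:

    name: standard leaf definition

    actions:
        - action: add_config
          attributes:
              url: /files/templates/ntp.template
          onstart: configuring NTP
          onsuccess: NTP configured

    attributes:
        ntp_server: ntp.example.com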

Static provisioning - attributes

Attributes are either key/value pairs, key/dictionary pairs, key/list pairs or key/reference pairs. They are all sent to the client in order to be passed in as arguments to actions.

Here are a few examples:

  • key/value:

        my_attribute: my_value

  • key/dictionary:

        my_attribute:
            key1: value1
            key2: value2

  • key/list:

        my_attribute:
            - my_value1
            - my_value2
            - my_valueN

  • key/reference:

        my_attribute: $my_other_attribute

key/reference attributes are identified by the fact that the value starts with the ‘$’ sign, followed by the name of another attribute. They are evaluated before being sent to the client.


    my_other_attribute: dummy
    my_attribute : $my_other_attribute

will be evaluated to:

    my_other_attribute: dummy
    my_attribute : dummy

If a reference points to a non-existing attribute, then the variable substitution will result in a value of None.


Only one level of indirection is allowed - if multiple levels of indirection are used, then the data sent to the client will contain unevaluated key/reference pairs in the attributes list (which might lead to failures or unexpected results in the client).

The values of the attributes can be either strings, numbers, lists, dictionaries, or references to other attributes or functions.

The supported functions are:

  • allocate(resource_pool) - allocates an available resource from a resource pool; the allocation is performed on the server side, and the result of the allocation is passed to the client via the definition


Functions can only be used with strings as arguments, currently. See section on add_config action for examples.

Attributes can be defined in three places:

  • in the definition, at action scope
  • in the definition, at global scope
  • in the node’s attributes file (see below)

attributes is a file which can be used to store attributes associated with the node’s definition. This is especially useful when multiple nodes share the same definition: instead of editing each node’s definition in order to add the attributes (at the global or action scope), all nodes can share the same definition (which might be symlinked into their individual node folders), and the user only has to create the attributes file for each node. The attributes file must be a valid key/value YAML file.

<key>: <value>
<key>: <value>

For key/value, key/list and key/reference attributes, in case of conflicts between the three scopes, the following order of precedence determines the final value sent to the client:

  1. action scope in the definition takes precedence
  2. attributes file comes next
  3. global scope in the definition comes last

For key/dict attributes, in case of conflicts between the scopes, the dictionaries are merged. In the event of dictionary key conflicts, the same precedence rules from above apply.
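
As a hypothetical illustration of these rules, suppose the key vlan is set in all three scopes:

    # definition, global scope
    attributes:
        vlan: 100

    # node's attributes file
    vlan: 200

    # definition, action scope
    actions:
        - action: add_config
          attributes:
              vlan: 300

The client receives vlan: 300, because the action scope in the definition takes precedence over both the attributes file and the global scope.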

Static provisioning - pattern

The pattern file provides a way to validate the node’s topology during the bootstrap process (if topology validation is enabled). The pattern file can be either:

  • manually created
  • auto-generated by the server, when the node matches one of the patterns in neighbordb (the pattern that is matched in neighbordb is, then, written to this file and used for topology validation in subsequent re-runs of the bootstrap process)

The format of a pattern is very similar to the format of neighbordb (see neighbordb section below):

variables:
    <variable_name>: <function>

name: <single line description of pattern>               # optional
interfaces:
    - <port_name>: <system_name>:<neighbor_port_name>
    - <port_name>:
        device: <system_name>
        port: <neighbor_port_name>
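
For example, a hypothetical pattern requiring that the node’s Ethernet1 be wired to port Ethernet10 of a specific spine could be written as:

    name: leaf connected to spine1
    interfaces:
        - Ethernet1: veos-dc1-pod1-spine1:Ethernet10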

If the pattern file is missing when the node makes a GET request for its definition, the server will log a message and return either:

  • 400 (BAD_REQUEST) if topology validation is enabled
  • 200 (OK) if topology validation is disabled

If topology validation is enabled globally, the following patterns can be used in order to disable it for a particular node:

  • match any node which has at least one LLDP-capable neighbor:
name: <pattern name>
interfaces:
    - any: any:any


  • match any node which has no LLDP-capable neighbors:
name: <pattern name>
interfaces:
    - none: none:none

Static provisioning - config-handler

The config-handler file can be any script which can be executed on the server. The script will be executed every time a PUT startup-config request succeeds for the node.

The script can be used for raising alarms, performing checks, submitting the startup-config file to a revision control system, etc.

Static provisioning - log

The .node file contains a cached copy of the node’s details that were received during the POST request the node makes to /nodes (URI). This cache is used to validate the node’s neighbors against the pattern file, if topology validation is enabled (during the GET request the node makes in order to retrieve its definition).

The .node file is created automatically by the server and should not be edited manually.

Example .node file:

{"neighbors": {"Management1": [{"device": "",
                                "port": "0050.569b.9ba5"}],
               "Ethernet2": [{"device": "veos-dc1-pod1-spine1",
                              "port": "0050.569a.9321"}]},
 "model": "vEOS",
 "version": "4.13.7M",
 "systemmac": "005056b863ac"}

Dynamic provisioning - overview

A node can be dynamically provisioned by creating a matching neighbordb ([data_root]/neighbordb) entry which maps to a definition. The entry can potentially match multiple nodes. The associated definition should be created in [data_root]/definitions/.

Dynamic provisioning - neighbordb

The neighbordb YAML file defines mappings between patterns and definitions. If a node is not already configured via a static entry, the server attempts to match the node’s topology details against the patterns in neighbordb. If a match succeeds, a node definition is automatically generated for the node (based on the mapping in neighbordb).

There are 2 types of patterns supported in neighbordb: node-specific (containing the node attribute, which refers to the unique_id of the node) and global patterns.


  • if multiple node-specific entries reference the same unique_id, only the first will be in effect - all others will be ignored
  • if both the node and interfaces attributes are specified and a node’s unique_id is a match, but the topology information is not, then the overall match will fail and the global patterns will not be considered
  • if there is no matching node-specific pattern for a node’s unique_id, then the server will attempt to match the node against the global patterns (in the order they are specified in neighbordb)
  • if a node-specific pattern matches, the server will automatically generate an open pattern in the node’s folder. This pattern will match any device with at least one LLDP-capable neighbor. Example: any: any:any
variables:
    <variable_name>: <function>

patterns:
    - name: <single line description of pattern>
      definition: <definition_url>
      node: <unique_id>
      config-handler: <config-handler>
      variables:
          <variable_name>: <function>
      interfaces:
          - <port_name>: <system_name>:<neighbor_port_name>
          - <port_name>:
              device: <system_name>
              port: <neighbor_port_name>


Mandatory attributes: name, definition, and either node, interfaces or both.

Optional attributes: variables, config-handler.
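
Putting this together, a small hypothetical neighbordb with one node-specific pattern and one global pattern might look like:

    variables:
        any_spine: includes('spine')

    patterns:
        - name: special-case node
          definition: tor
          node: 001c73aabbcc
          interfaces:
              - any: any:any

        - name: standard leaf
          definition: leaf
          interfaces:
              - Ethernet1: $any_spine:Ethernet10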


The variables can be used to match the remote device and/or port name (<system_name>, <neighbor_port_name> above) for a neighbor. The supported values are:

  • <string>: same as exact(<string>) below
  • exact(pattern): defines a pattern that must be matched exactly (Note: this is the default function if another function is not specified)
  • regex(pattern): defines a regex pattern to match the system/port name against
  • includes(string): defines a string that must be present in the system/port name
  • excludes(string): defines a string that must not be present in the system/port name

node: unique_id

Serial number or MAC address, depending on the global ‘identifier’ attribute in ztpserver.conf.

interfaces: port_name

Local interface name - supported values:

  • Any interface
    • any
  • No interface
    • none
  • Explicit interface
    • Ethernet1
    • Ethernet2/4
    • Management1
  • Interface list/range
    • Ethernet1-2
    • Ethernet1,3
    • Ethernet1-2,3/4
    • Ethernet1-2,4
    • Ethernet1-2,4,6
    • Ethernet1-2,4,6,8-9
    • Ethernet4,6,8-9
    • Ethernet10-20
    • Ethernet1/3-2/4 *
    • Ethernet3-$ *
    • Ethernet1/10-$ *
  • All Interfaces on a Module
    • Ethernet1/$ *


* Planned for future releases.


Remote system and interface name - supported values (STRING = any string which does not contain any white spaces):

  • any: interface is connected
  • none: interface is NOT connected
  • <STRING>:<STRING>: interface is connected to specific device/interface
  • <STRING> (Note: if only the device is configured, then ‘any’ is implied for the interface. This is equal to <DEVICE>:any): interface is connected to device
  • <DEVICE>:any: interface is connected to device
  • <DEVICE>:none: interface is NOT connected to device (might be connected or not to some other device)
  • $<VARIABLE>:<STRING>: interface is connected to specific device/interface
  • <STRING>:<$VARIABLE>: interface is connected to specific device/interface
  • $<VARIABLE>:<$VARIABLE>: interface is connected to specific device/interface
  • $<VARIABLE> (‘any’ is implied for the interface. This is equal to $<VARIABLE>:any): interface is connected to device
  • $<VARIABLE>:any: interface is connected to device
  • $<VARIABLE>:none: interface is NOT connected to device (might be connected or not to some other device)

port_name: system_name:neighbor_port_name

Negative constraints

  1. any: DEVICE:none: no port is connected to DEVICE
  2. none: DEVICE:any: same as above
  3. none: DEVICE:none: same as above
  4. none: any:PORT: no device is connected to PORT on any device
  5. none: DEVICE:PORT: no device is connected to DEVICE:PORT
  6. INTERFACES: any:none: interfaces not connected
  7. INTERFACES: none:any: same as above
  8. INTERFACES: none:none: same as above
  9. INTERFACES: none:PORT: interfaces not connected to PORT on any device
  10. INTERFACES: DEVICE:none: interfaces not connected to DEVICE
  11. any: any:none: bogus, will prevent pattern from matching anything
  12. any: none:none: bogus, will prevent pattern from matching anything
  13. any: none:any: bogus, will prevent pattern from matching anything
  14. any: none:PORT: bogus, will prevent pattern from matching anything
  15. none: any:any: bogus, will prevent pattern from matching anything
  16. none: any:none: bogus, will prevent pattern from matching anything
  17. none: none:any: bogus, will prevent pattern from matching anything
  18. none: none:none: bogus, will prevent pattern from matching anything
  19. none: none:PORT: bogus, will prevent pattern from matching anything

Positive constraints

  1. any: any:any: “Open pattern” matches anything
  2. any: any:PORT: matches any interface connected to any device’s PORT
  3. any: DEVICE:any: matches any interface connected to DEVICE
  4. any: DEVICE:PORT: matches any interface connected to DEVICE:PORT
  5. INTERFACES: any:any: matches if local interfaces is one of INTERFACES
  6. INTERFACES: any:PORT: matches if one of INTERFACES is connected to any device’s PORT
  7. INTERFACES: DEVICE:any: matches if one of INTERFACES is connected to DEVICE
  8. INTERFACES: DEVICE:PORT: matches if one of INTERFACES is connected to DEVICE:PORT

Definitions


[data_root]/definitions/ contains a set of shared definition files which can be associated with patterns in neighbordb (see the Dynamic provisioning - neighbordb section below) or added to/symlink-ed from nodes’ folders.

See Static provisioning - definition for more.


[data_root]/actions/ contains the set of all actions available for use in definitions.

  • add_config: adds a block of configuration to the final startup-config file. Required attributes: url
  • copy_file: copies a file from the server to the destination node. Required attributes: src_url, dst_url, overwrite, mode
  • install_cli_plugin: installs a new EOS CLI plugin and configures rc.eos. Required attributes: url
  • install_extension: installs a new EOS extension. Required attributes: extension_url, autoload, force
  • install_image: validates and installs a specific version of EOS. Required attributes: url, version
  • replace_config: sends an entire startup-config to the node (overrides add_config). Required attributes: url
  • send_email: sends an email to a set of recipients, routed through a relay host; can include file attachments. Required attributes: smarthost, sender, receivers, subject, body, attachments, commands
  • run_bash_script: runs a bash script during bootstrap. Required attributes: url
  • run_cli_commands: runs CLI commands during bootstrap. Required attributes: url

Additional details on each action are available in the Actions module docs.


Assume that we have a block of configuration that adds a list of NTP servers to the startup configuration. The action would be constructed as such:

    - name: configure NTP
      action: add_config
      attributes:
          url: /files/templates/ntp.template

The above action would reference the ntp.template file which would contain configuration commands to configure NTP. The template file could look like the following:

ntp server
ntp server
ntp server
ntp server

When this action is called, the configuration snippet above will be appended to the startup-config file.

The configuration templates can also contain variables, which are automatically substituted during the action’s execution. A variable is marked in the template with the ‘$’ symbol.

For example, let’s assume a need for a more generalized template where only node-specific values change (such as the hostname and management IP address). In this case, we build an action that allows for variable substitution as follows.

    - name: configure system
      action: add_config
      attributes:
          url: /files/templates/system.template
          variables:
              hostname: veos01

The corresponding template file system.template will provide the configuration block:

hostname $hostname
interface Management1
    description OOB interface
    ip address $ipaddress
    no shutdown

This will result in the following configuration being added to the startup-config:

hostname veos01
interface Management1
    description OOB interface
    ip address
    no shutdown

Note that in each of the examples above, the template files are just standard EOS configuration blocks.

Resource pools

[data_root]/resources/ contains global resource pools from which attributes in definitions can be allocated via the allocate(...) function.

The resource pools provide a way to dynamically allocate a resource to a node when the node definition is created. The resource pools are key/value YAML files that contain a set of resources to be allocated to a node (whenever the allocate(...) function is used in the definition).
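
For example, assuming a pool file named mgmt_subnet under [data_root]/resources/ (a hypothetical name), a definition could allocate from it like this:

    attributes:
        ip_address: allocate('mgmt_subnet')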

<value1>: <"null"|node_identifier>
<value2>: <"null"|node_identifier>

In the example below, a resource pool contains a series of 8 IP addresses to be allocated. Entries which have not yet been allocated to a node are marked using the null descriptor:

    <value1>: null
    <value2>: null
    ...
    <value8>: null

When a resource is allocated to a node’s definition, the first available null value is replaced by the node’s unique_id. For example:

    <value1>: 001c731a2b3c
    <value2>: null
    ...
    <value8>: null

On subsequent attempts to allocate the resource to the same node, ZTPS will first check to see whether the node has already been allocated a resource from the pool. If it has, it will reuse the resource instead of allocating a new one.

In order to free a resource from a pool, simply set the value associated with it back to null by editing the resource file.

Alternatively, $ztps --clear-resources can be used in order to free all resources in all resource files.

Config-handlers


[data_root]/config-handlers/ contains config-handlers which can be associated with nodes via neighbordb. A config-handler script is executed every time a PUT startup-config request succeeds for a node which is associated with it.

Other files

[data_root]/files/ contains the files that actions might request from the server. For example, [data_root]/files/images/ could contain all EOS SWI files.