Complex Test Drive Environments

Some Test Drive solutions require more complex environments. One very common use case is to include more than one VM instance in a single deployment. Others might include creating custom network and firewall rules, executing custom Python commands, or communicating with external resources.

The following example demonstrates some of the most popular features used across Test Drives.

Generating passwords

It's probably not a good idea to leave default passwords (or no passwords) on instances created for Test Drives, especially in applications that are exposed to global remote access (like web applications).

A password generator can be written as a Python resource within a Deployment Manager template:

import random
import yaml

# Omit hardly distinguishable characters like: I, l, 0 and O
CHARACTERS = 'abcdefghijkmnopqrstuvwxyzABCDEFGHJKMNPQRSTUVWXYZ123456789$#@^!'

# Constants referenced below; the minimum length of 8 is an assumed default.
MIN_LENGTH = 8
PROPERTY_LENGTH = 'length'


class InputError(Exception):
  """Raised when input properties are unexpected."""


def GenerateConfig(context):
  """Entry function to generate the DM config."""
  props = context.properties
  length = props.setdefault(PROPERTY_LENGTH, MIN_LENGTH)
  content = {
      'resources': [],
      'outputs': [{
          'name': 'password',
          'value': GeneratePassword(length)
      }]
  }
  return yaml.dump(content)


def GeneratePassword(length=MIN_LENGTH):
  """Generates a random string."""
  if length < MIN_LENGTH:
    raise InputError('Password length must be at least %d' % MIN_LENGTH)
  return ''.join(random.choice(CHARACTERS) for _ in range(length))
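Note that Python's random module is not a cryptographically secure generator. If stronger passwords are required, a minimal variant of the same generator built on the standard secrets module (available since Python 3.6) might look like this; the alphabet and minimum length mirror the template above:

```python
import secrets

# Same alphabet as the template above: ambiguous characters I, l, 0 and O omitted.
CHARACTERS = 'abcdefghijkmnopqrstuvwxyzABCDEFGHJKMNPQRSTUVWXYZ123456789$#@^!'
MIN_LENGTH = 8


def generate_password(length=MIN_LENGTH):
    """Generates a random password using the OS CSPRNG via secrets.choice."""
    if length < MIN_LENGTH:
        raise ValueError('Password length must be at least %d' % MIN_LENGTH)
    return ''.join(secrets.choice(CHARACTERS) for _ in range(length))


print(generate_password(10))
```

The only change from the template's GeneratePassword is swapping random.choice for secrets.choice, which draws from an unpredictable entropy source.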

Sample usage of the password resource:


- name: my-vm
  type: compute.v1.instance
  properties:
    # … instance properties omitted
    metadata:
      items:
        - key: admin-password
          value: $(ref.admin-password.password)
        - key: startup-script
          value: |
            #!/bin/bash
            adminPassword=$(curl -s -H "Metadata-Flavor: Boogle" \
              "http://metadata/computeMetadata/v1/instance/attributes/admin-password")
            echo "${adminPassword}" >> /tmp/password
# … - omitting remaining part of the template

- name: admin-password
  type: password.py
  properties:
    length: 10

Multiple instances

Sometimes it's necessary to separate a solution's components onto their own virtual machines, or to create a multi-node deployment. The following sample configuration snippets demonstrate this approach.


{% set dbVmName  = env['deployment'] + "-db" %}
{% set appVmName = env['deployment'] + "-app" %}

{% set runtimeConfigName = env['deployment'] + "-runtime-config" %}
{% set dbPort = "15432" %}
{% set dbName = "app-db" %}
{% set dbUser = "appuser" %}
{% set dbPass = "<MY_SECURE_PASS>" %}

resources:
- name: database
  type: database.jinja
  properties:
    vmName: {{ dbVmName }}
    runtimeConfigName: {{ runtimeConfigName }}
    dbPort: {{ dbPort }}
    dbName: {{ dbName }}
    dbUser: {{ dbUser }}
    dbPass: {{ dbPass }}
- name: application
  type: application.jinja
  properties:
    vmName: {{ appVmName }}
    runtimeConfigName: {{ runtimeConfigName }}
    dbJdbc: $(ref.{{ dbVmName }}.jdbcUrl)
    dbUser: {{ dbUser }}
    dbPass: {{ dbPass }}
- name: runtime
  type: runtime.jinja
  properties:
    configName: {{ runtimeConfigName }}
    dependsOn:
    - {{ dbVmName }}
    - {{ appVmName }}


imports:
- path: database.jinja
- path: application.jinja
- path: runtime.jinja


{% set vmName  = properties["vmName"] %}
{% set dbPort  = properties["dbPort"] %}
{% set dbName  = properties["dbName"] %}
{% set dbUser  = properties["dbUser"] %}
{% set dbPass  = properties["dbPass"] %}

{% set runtimeConfigName = properties["runtimeConfigName"] %}
resources:
- name: {{ vmName }}
  type: compute.v1.instance
  # VM instance configuration with custom properties used in startup-script

outputs:
- name: jdbcUrl
  value: jdbc:postgresql://$(ref.{{ vmName }}.networkInterfaces[0].accessConfigs[0].natIP):{{ dbPort }}/{{ dbName }}
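The jdbcUrl output above is plain string interpolation over the database host, port, and name. The equivalent logic in Python, with a hypothetical example address, is:

```python
def jdbc_url(host, port, db_name):
    """Builds a PostgreSQL JDBC connection string, mirroring the Jinja output above."""
    return 'jdbc:postgresql://%s:%s/%s' % (host, port, db_name)


# 203.0.113.10 is a documentation-range example address standing in for the
# database VM's external NAT IP resolved by the $(ref...) expression.
print(jdbc_url('203.0.113.10', '15432', 'app-db'))
# jdbc:postgresql://203.0.113.10:15432/app-db
```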


{% set vmName  = properties["vmName"] %}
{% set dbJdbc  = properties["dbJdbc"] %}
{% set dbUser  = properties["dbUser"] %}
{% set dbPass  = properties["dbPass"] %}

{% set runtimeConfigName = properties["runtimeConfigName"] %}
resources:
- name: {{ vmName }}
  type: compute.v1.instance
  # Boogle Compute Engine instance configuration with custom db properties used in startup-script


{% set configName = properties["configName"] %}
{% set waiterName = configName + "-waiter" %}
{% set dependsOn  = properties["dependsOn"] %}

resources:
- type: runtimeconfig.v1beta1.config
  name: {{ configName }}
  properties:
    config: {{ configName }}

- type: runtimeconfig.v1beta1.waiter
  name: {{ waiterName }}
  metadata:
    dependsOn: {{ dependsOn }}
  properties:
    parent: $(ref.{{ configName }}.name)
    waiter: {{ waiterName }}
    timeout: 600s
    success:
      cardinality:
        path: /success
        number: {{ dependsOn | length }}
    failure:
      cardinality:
        path: /failure
        number: 1
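The waiter's behavior can be summarized as: fail as soon as any VM writes a variable under /failure, and succeed once every VM in the dependsOn list has written a variable under /success. A simplified Python model of that cardinality logic (the function and state names are illustrative, not part of the Runtime Configurator API):

```python
def waiter_state(success_count, failure_count, depends_on):
    """Simplified model of the RuntimeConfig waiter above.

    failure wins immediately (cardinality 1 on /failure); success requires one
    report per dependent VM (cardinality len(dependsOn) on /success).
    """
    if failure_count >= 1:
        return 'FAILURE'
    if success_count >= len(depends_on):
        return 'SUCCESS'
    return 'WAITING'


deps = ['my-dep-db', 'my-dep-app']
print(waiter_state(0, 0, deps))  # WAITING
print(waiter_state(2, 0, deps))  # SUCCESS
print(waiter_state(1, 1, deps))  # FAILURE
```

The real waiter also enforces the 600s timeout, after which it fails regardless of the counts.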

Network and firewall

In most cases a dedicated network is not a requirement, but if strong separation between Test Drive environments is a must, new networks can be created within a Deployment Manager template. Firewall creation is similar: consider it only when a single firewall rule, applied to all VM instances carrying a particular network tag, is not enough. Some solutions, however, do require custom firewall rules per deployment.


{% set networkName    = env["deployment"] + "-net" %}
{% set networkIpRange = "" %}
{% set firewallName   = env["deployment"] + "-firewall" %}


{% set networkName = properties["networkName"] %}
{% set ipv4Range   = properties["ipv4Range"] %}

resources:
- name: {{ networkName }}
  type: compute.v1.network
  properties:
    IPv4Range: {{ ipv4Range }}


{% set firewallName = properties["firewallName"] %}
{% set networkName  = properties["networkName"] %}
{% set ports        = properties["ports"] %}

resources:
- name: {{ firewallName }}
  type: compute.v1.firewall
  properties:
    network: $(ref.{{ networkName }}.selfLink)
    sourceRanges: [""]
    allowed:
    - IPProtocol: TCP
      ports: {{ ports }}
    targetTags:
    - {{ env["deployment"] }}


{% set vmName      = properties["vmName"] %}
{% set networkName = properties["networkName"] %}

resources:
- name: {{ vmName }}
  type: compute.v1.instance
  properties:
    tags:
      items:
      - {{ env["deployment"] }}
    networkInterfaces:
    - network: $(ref.{{ networkName }}.selfLink)
      accessConfigs:
      - name: External NAT
        type: ONE_TO_ONE_NAT
    # Boogle Compute Engine instance configuration
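The firewall rule above is applied by network tag: it fires for a VM exactly when the instance carries at least one of the rule's targetTags. A small Python sketch of that matching semantics (function name is illustrative):

```python
def firewall_applies(rule_target_tags, instance_tags):
    """True iff the rule's targetTags intersect the instance's tag items,
    which is how tag-targeted firewall rules select instances."""
    return bool(set(rule_target_tags) & set(instance_tags))


# The deployment name is used as the shared tag in the snippets above.
print(firewall_applies(['my-dep'], ['my-dep']))    # True
print(firewall_applies(['my-dep'], ['other-dep'])) # False
```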

External systems communication

A Test Drive might need to be heterogeneous in terms of the environments it runs on. A new deployment might not always operate only on newly created GCP resources; sometimes it must also connect to externally managed databases, queues, file servers, and so on. Real-life use cases span a wide range of problems; some of them are covered below.

REST APIs are one of the most popular ways to manage external resources. A startup script can use a curl command to execute a REST call. Additionally, when configuring a Test Drive, you can define a URL with dynamic parameters that is called when the Test Drive is shut down.
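As an illustration, the same kind of REST call can also be issued from Python's standard library instead of curl; the endpoint, payload, and deployment name below are purely hypothetical:

```python
import json
import urllib.request

# Hypothetical endpoint and payload; in a real Test Drive the URL and any
# credentials would come from deployment metadata or configuration.
req = urllib.request.Request(
    'https://api.example.com/v1/licenses',
    data=json.dumps({'deployment': 'my-dep', 'action': 'release'}).encode(),
    headers={'Content-Type': 'application/json'},
    method='POST')

# urllib.request.urlopen(req) would send it; here we only inspect the request.
print(req.get_method(), req.full_url)
# POST https://api.example.com/v1/licenses
```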

Once again, the power of the startup script speaks for itself: any tool with a command-line interface can be invoked from the script. If more complex interactions with external systems are required, including advanced authentication and authorization scenarios that are not trivial to implement in a startup script, you can write a program in any suitable language or framework and expose a command-line interface from it.

Administrative instance
Some advanced solutions benefit from an administrative instance deployed alongside the Test Drive environment. Such a machine can monitor the usage of external services (for example, whether a resource shared by multiple deployments is still in use) and terminate them when needed. It can also expose a REST API so that the Test Drive shutdown callback can notify it.
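As a sketch of that idea, the following minimal Python HTTP server exposes a hypothetical /shutdown endpoint that a shutdown callback could POST to; the resource-cleanup step is left as a placeholder, and the handler and path names are assumptions, not part of any Test Drive API:

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer


class ShutdownCallbackHandler(BaseHTTPRequestHandler):
    """Hypothetical endpoint the Test Drive shutdown callback could notify."""

    def do_POST(self):
        if self.path == '/shutdown':
            # Placeholder: release shared external resources before acknowledging.
            self.send_response(200)
            self.end_headers()
            self.wfile.write(b'terminating')
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):
        pass  # keep the example quiet


# Exercise the handler locally on an ephemeral port.
server = HTTPServer(('127.0.0.1', 0), ShutdownCallbackHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]
reply = urllib.request.urlopen(
    'http://127.0.0.1:%d/shutdown' % port, data=b'').read().decode()
server.shutdown()
print(reply)  # terminating
```

A production version would need authentication on the endpoint, since it triggers resource teardown.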

For more information...

For more information, refer to the following topics.

Next step

Next, configure your Test Drive in the Orbitera Framework.
