Working with Game On! Locally

Developing and testing your room locally in a production-like environment is an important aspect of Twelve-Factor applications: it reduces the likelihood that what you create locally will fail in new and unexpected ways when activated in production.

Tip

Don’t fancy following all of this and installing everything on your favorite dev box? Want something to just go do it all for you?

Have a look at our Vagrant project, which can take care of this for you and leave you with a virtual machine ready for local development. The rest of the information on this page is still useful for understanding what has been done inside the VM for you.

Using Docker

We use Docker (and specifically Docker Compose) to set up a local environment that is as similar as possible to what will run in production. We provide some pre-built images to help you do that, too.

We require docker-compose version 1.6.0 or greater.

Installing Docker varies by platform. On Windows and macOS, you can also choose between Docker native and Docker Toolbox. Both should work.

There is a conceptual difference between running on Linux or Docker native vs. Docker Toolbox on Windows or macOS that is worth explaining:

  • When you run natively, the "host" for your containers is the OS itself, so 127.0.0.1 will work just fine.

  • When you run in Docker Toolbox, a virtual machine acts as the host for your containers. This means that (for URLs and other things) you need to use the IP address of that host, as shown in the sketch below.
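For example, a quick way to work out which address to use (the docker-machine command and the machine name default are covered in the notes at the end of this page):

    # Native Docker (Linux / Docker native): the containers' host is your OS,
    # so http://127.0.0.1/ works as-is.
    # Docker Toolbox: ask docker-machine for the VM's IP and use that instead
    # ("default" is the usual machine name; substitute yours if it differs).
    docker-machine ip default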

The Game On! Root Project (gameon)

Game On! uses Docker Compose to coordinate the containers that run each of the game’s core services. In the root project there will be a docker-compose.yml that declares the various core services, and a docker-compose.override.yml.example that can be used when developing locally to help speed up round trip times.

When Game On! runs in the cloud, it uses etcd to obtain its configuration. When running locally, it expects all of this to be fed to it via the environment, which docker-compose.yml sets up per service from the gameon.env file.
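If you want to check what will be supplied, you can peek at gameon.env and at the merged configuration that docker-compose will use (the config command is available in the docker-compose versions required above):

    # Values that will be fed to the services via the environment
    cat gameon.env
    # Validate and print the merged compose configuration
    docker-compose config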

SSH Keys and KeyStores

Because Game On! uses a certificate for HTTPS and for JWT signing, one needs to be generated for local use. This is managed via Docker, with a special mapped volume providing the generated local keystore to the containers. Both the go-setup.sh and go-build.sh scripts will generate the local keystore. If it ever needs to be replaced, delete the keystore directory and run the appropriate script again.
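For example, to force the keystore to be regenerated (directory and script names as described above):

    # Remove the generated keystore; the setup/build script will recreate it
    rm -rf keystore
    ./go-setup.sh    # or ./go-build.sh, if you are working on core services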

Local Room Development

  1. Obtain the source for this repository:

    • HTTPS: git clone https://github.com/gameontext/gameon.git

    • SSH: git clone git@github.com:gameontext/gameon.git

  2. One-time setup: this ensures that the required keystores are created, and that you have an env file suitable for use with docker-compose (whether you’re using docker-machine or not).

    ./go-setup.sh

    This setup step also pulls the initial images required for running the system.

  3. Start the Game On! platform services (amalgam8, couchdb, and an ELK stack!). These services are configured in platformservices.yml.

    ./go-platform-services.sh start
  4. Start the core Game On! services (Player, Mediator, Map):

    ./go-run.sh start

    The go-run.sh script contains shorthand operations to help with starting, stopping, and cleaning up after the Game On! core services (defined in docker-compose.yml).
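    For example (start is shown above; the stop sub-command name is assumed here from that description, so check the script itself for the exact operations it supports):

    ./go-run.sh stop     # stop the core Game On! services
    ./go-run.sh start    # bring them back up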

Game On! is now running locally.

Carry on with building your rooms!

Core Game Development

If you change your mind and decide you want to start hacking on a core game service, no worries! You can mix and match the two approaches: just clone the submodule(s) you wish to edit and carry on with the steps below.

  1. Change to the gameon directory

    cd gameon
  2. Obtain the source for the project that you want to change. The easiest way is to take advantage of git submodules. Using the map service as an example:

    git submodule init map
    git submodule update map
  3. Build/initialize the projects (includes building wars and creating keystores required for local development).

    ./go-build.sh
  4. Start the game’s platform services (amalgam8, couchdb, and an ELK stack!). These services are configured in platformservices.yml.

    ./go-platform-services.sh start
  5. Build the core game Docker containers with docker-compose (see below):

    ./go-rebuild.sh all

Game On! is now running locally.

Tip
If you plan to edit projects with Eclipse, run eclipse.sh to generate the project files.

Building the core service Docker containers

At this stage, we now have the keystores built, code compiled, and the platform services running. Before we can launch the game, we have to create/download the containers for the core services.

We provide a small script that can be used for this. It takes a list of the Game On projects to rebuild, and will:

  • Stop any old running container for that project

  • Rebuild the code for the project (if present)

  • Remove any old container for the project

  • Build a new container for that project

  • Launch the container using Docker Compose

  • Update the service proxy controller to route traffic to the correct version of the service.

Example 1. Rebuild All Game On Services
./go-rebuild.sh all
Example 2. Rebuild Selected Game On Services
./go-rebuild.sh auth proxy

After building all of the Game On! services, the game will now be running locally.

  • If you’re running a *nix variant, you can access it at http://127.0.0.1/

  • If you’re running Mac or Windows, access it using the docker host IP address (see the notes below).
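A quick way to confirm the proxy is answering (substitute your docker host IP for 127.0.0.1 on Docker Toolbox):

    # Should return an HTTP response from the proxy listening on port 80
    curl -I http://127.0.0.1/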

Tip
To view console logs from the running containers, use docker ps to find the name of the container whose logs you wish to view, and then use docker logs containername, e.g. docker logs gameon_auth_1. Alternatively, you can use docker-compose logs <service_name>, e.g. docker-compose logs player.

If you are editing the core game services, you may wish to take a look at how each service is exposed via local ports mapped in the docker-compose.yml configuration. For example, map is available via HTTPS on port 9447 locally, as well as at its mapped URL via the proxy on port 80.
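For example, to poke the map service directly on its mapped port (the path to request depends on the service; this simply checks that something is listening, and -k skips verification of the locally generated certificate). On Docker Toolbox, substitute your docker host IP for 127.0.0.1:

    # Direct HTTPS to the map service on its locally mapped port
    curl -k https://127.0.0.1:9447/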

Tip
Many of the Game On! services also have a simple "LogView" console to assist with debugging during local development; look for the LogView class in each project to figure out the endpoint address.

Notes

Supporting 3rd party auth

3rd party authentication (Twitter, GitHub, etc.) will not work locally, but the anonymous/dummy user will. If you want to test with one of the 3rd party authentication providers, you’ll need to set up your own tokens to do so.

Determining the host IP address (Docker Toolbox)

After you have Docker Toolbox installed, verify the host machine name: docker-machine ls. The default name is default, but if you’re a former Boot2Docker user, it may be dev instead. Substitute this value appropriately in what follows.

If you aren’t using the docker quick-start terminal, you’ll need to set the docker environment variables in your command shell using eval "$(docker-machine env default)".

Get the IP address for your host using docker-machine ip default.
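Putting those commands together, a typical Docker Toolbox session looks something like this (machine name default assumed):

    docker-machine ls                      # confirm the machine name
    eval "$(docker-machine env default)"   # point the docker CLI at the VM
    docker-machine ip default              # note the IP to use in URLs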

go-build.sh and go-setup.sh will create a customized copy of gameon.env for the active DOCKER_MACHINE_NAME, substituting in the associated IP address.

Top-down vs. incremental updates

If you want to try using incremental publish, where your changes are live inside the container without requiring the container to be stopped, started, rebuilt or otherwise messed with, you’ll need to add some lines to docker-compose.override.yml to create overlay volumes.

docker-compose.override.yml.example maps the expected GitHub submodule paths as volumes. Copy snippets from that file for the services you’re interested in into docker-compose.override.yml.
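A minimal sketch of that workflow, assuming you are iterating on a service you have cloned as a submodule (map, for example):

    # Start from the provided example override file
    cp docker-compose.override.yml.example docker-compose.override.yml
    # Then edit docker-compose.override.yml, keeping only the volume entries
    # for the services you have cloned locally and removing the rest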

Iterative development of Java applications with WDT

We highly recommend using WebSphere Developer Tools (WDT) to work with the Java services contained in the sample. Going along with the incremental publish support provided by the docker-compose.override.yml file, there is some (one-time) configuration required to make WDT happy with the docker-hosted applications.
