This RIDE Server documentation is adapted from the documentation of Codenvy, on top of which RIDE Server is built.

Installation

Create a folder where you would like to save and launch RIDE Server. From that folder, with Docker 1.11+ (1.12.5+ recommended) on Windows, Mac, or Linux run:

docker run -i  -v /var/run/docker.sock:/var/run/docker.sock \
-v $PWD/data:/data \
-e CODENVY_ADMIN_EMAIL=admin@example.com \
-e CODENVY_MACHINE_SERVER_EXTRA_VOLUME=$PWD/ride-share:/ride-share:ro \
rbrain/rideserver-cli:latest start

where data is the folder in which all of your settings and workspaces are saved, admin@example.com is the administrator's email address, and ride-share is the directory into which you must copy your license key file, key.json. This key file is provided by your R-Brain sales representative when you purchase your RIDE Server license.
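As a sketch of the layout the run command above expects (folder names taken from that command; the key.json source path is whatever your sales representative provided):

```shell
# Prepare the install folder layout the run command above expects.
# key.json must come from your R-Brain sales representative.
mkdir -p data ride-share
# cp /path/to/your/key.json ride-share/key.json
# Confirm both folders exist before starting the container:
ls -d data ride-share
```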

rideserver-cli handles the download and installation process and launches RIDE Server. It also runs checks and reports results to your terminal. A successful start looks like this:

WARN: Bound 'rbrain/rideserver-cli' to 'rbrain/rideserver-cli:5.16.0'
INFO: Proxy: HTTP_PROXY=, HTTPS_PROXY=, NO_PROXY=*.local, 169.254/16
WARN: Did not detect TTY - interactive mode disabled
INFO: (codenvy cli): 5.16.0 - using docker 17.09.0-ce / docker4mac
INFO: (codenvy config): Generating codenvy configuration...
INFO: (codenvy config): Customizing docker-compose for running in a container
INFO: (codenvy start): Preflight checks
         mem (1.5 GiB):           [OK]
         disk (100 MB):           [OK]
         port 80 (http):          [AVAILABLE]
         port 443 (https):        [AVAILABLE]
         port 2181 (zookeeper):   [AVAILABLE]
         port 5000 (registry):    [AVAILABLE]
         port 23750 (socat):      [AVAILABLE]
         port 23751 (swarm):      [AVAILABLE]
         port 32000 (jmx):        [AVAILABLE]
         port 32001 (jmx):        [AVAILABLE]
         conn (browser => ws):    [OK]
         conn (server => ws):     [OK]
INFO: (codenvy start): Starting containers...
INFO: (codenvy start): Services booting...
INFO: (codenvy start): Server logs at "docker logs -f codenvy_codenvy_1"
INFO: (codenvy start): Postflight checks
         (192.168.65.2:23750/info):  [OK]
INFO: (codenvy start): Booted and reachable
INFO: (codenvy start): Ver: 5.16.0
INFO: (codenvy start): Use: http://localhost:80
INFO: (codenvy start): API: http://localhost:80/swagger 

Your RIDE Server is now reachable at http://<Your-Server-IP>:80

Logs and User Data

When RIDE Server initializes, it stores logs, user data, database data, and instance-specific configuration in the folder mounted to :/data/instance, or in an instance subfolder of the folder you mounted to :/data.

RIDE Server's containers save their logs in the same location:

/instance/logs/codenvy/<year>               # Server logs
/instance/logs/codenvy/che-machine-logs     # Workspace logs
/instance/logs/nginx                        # nginx access and error logs
/instance/logs/haproxy                      # HAproxy logs

User data is stored in:

/instance/data/codenvy                      # Project backups (we synchronize projs from remote ws here)
/instance/data/postgres                     # Postgres data folder (users, workspaces, stacks etc)
/instance/data/registry                     # Workspace snapshots

Instance configuration is generated by RIDE Server and is updated by our internal configuration utilities. These 'generated' configuration files should not be modified:

/instance/codenvy.ver.do_not_modify         # Version of Codenvy installed
/instance/docker-compose-container.yml      # Docker compose to launch internal services
/instance/docker-compose.yml                # Docker compose to launch Codenvy from the host without a container
/instance/config                            # Configuration files which are input mounted into the containers

Volume Mounts

We use volume mounts to configure certain parts of RIDE Server.
At a minimum, you must volume mount a local path to :/data; this is where RIDE Server installs its configuration, user data, version, and log information. RIDE Server also leaves a cli.log file in this location to help debug any odd behavior while running the system. In this folder we also create a codenvy.env file, which contains all of the admin configuration that you can set or override in a single location.

You can also use volume mounts to override where your user or backup data is stored. By default, these folders are created as sub-folders of the location mounted to :/data. However, if you do not want your /instance and /backup folders to be children of that location, you can set them individually with separate overrides.

docker run -it --rm -v /var/run/docker.sock:/var/run/docker.sock
                    -v <local-path>:/data
                    -v <a-different-path>:/data/instance
                    -v <another-path>:/data/backup
                       rbrain/rideserver-cli:<version> [COMMAND]

Hostnames

CODENVY_HOST is the IP address or DNS name at which the Codenvy endpoint will serve your users. RIDE Server attempts to auto-detect it by running the internal utility docker run --net=host eclipse/che-ip:nightly. This approach is not fool-proof: it is usually accurate on desktops, but often fails on hosted servers. If it fails, you can explicitly set this value when executing docker run:

docker run -it --rm -v /var/run/docker.sock:/var/run/docker.sock
                    -v <local-path>:/data
                    -e CODENVY_HOST=<your-ip-or-host>
                       rbrain/rideserver-cli:<version> [COMMAND]

Alternatively, you can edit the CODENVY_HOST value in codenvy.env.
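For example, a minimal sketch of updating that value in place (a sample codenvy.env stands in for the generated file under your :/data mount, and 192.0.2.10 is a placeholder address):

```shell
# Sample file standing in for the generated codenvy.env under your :/data mount.
printf 'CODENVY_HOST=localhost\n' > codenvy.env
# Point CODENVY_HOST at your server's IP or DNS name, then restart RIDE Server.
sed -i.bak 's/^CODENVY_HOST=.*/CODENVY_HOST=192.0.2.10/' codenvy.env
grep '^CODENVY_HOST=' codenvy.env
```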

Proxy Installation

RIDE Server can be installed and operated from behind a proxy:

1. Configure each physical node's Docker daemon with proxy access.

2. Optionally, override the default workspace proxy settings for users if you want to restrict their Internet access.

Before starting RIDE Server, configure Docker's daemon for proxy access. If you plan to scale RIDE Server with multiple host nodes, each host node must have its Docker daemon configured for proxy access. If you are installing RIDE Server on a desktop with Docker for Windows or Docker for Mac, these utilities have a settings GUI in which you can set the proxy directly.

HTTP_PROXY and/or HTTPS_PROXY set in the Docker daemon must include a protocol and port number. Proxy configuration is quite finicky, so please ensure you provide a fully qualified proxy location.

If you configure HTTP_PROXY or HTTPS_PROXY in your Docker daemon, RIDE Server automatically adds localhost,127.0.0.1,codenvy-swarm,CODENVY_HOST to your NO_PROXY value, where CODENVY_HOST is your server's DNS name or IP address. We recommend that you check these values in your codenvy.env and add the short- and long-form DNS entries to your Docker NO_PROXY setting if they are not already set.

This is the full set of proxy-related values in the codenvy.env. You can optionally modify these with different values.

CODENVY_HTTP_PROXY_FOR_CODENVY=<YOUR_PROXY_FROM_DOCKER>
CODENVY_HTTPS_PROXY_FOR_CODENVY=<YOUR_PROXY_FROM_DOCKER>
CODENVY_NO_PROXY_FOR_CODENVY=localhost,127.0.0.1,codenvy-swarm,<YOUR_CODENVY_HOST>

CODENVY_HTTP_PROXY_FOR_CODENVY_WORKSPACES=<YOUR_PROXY_FROM_DOCKER>
CODENVY_HTTPS_PROXY_FOR_CODENVY_WORKSPACES=<YOUR_PROXY_FROM_DOCKER>
CODENVY_NO_PROXY_FOR_CODENVY_WORKSPACES=localhost,127.0.0.1,<YOUR_CODENVY_HOST>

The last three entries are injected into workspaces created by your users, giving them access to the Internet from within their workspaces. You can comment out these entries to disable that access.
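A quick way to review or extend the NO_PROXY entries is with standard shell tools. This is a sketch against a sample codenvy.env; the hostname values are placeholders, and in a real install you would edit the codenvy.env under your :/data mount:

```shell
# Sample standing in for the generated codenvy.env under your :/data mount.
cat > codenvy.env <<'EOF'
CODENVY_NO_PROXY_FOR_CODENVY=localhost,127.0.0.1,codenvy-swarm,ride.example.com
CODENVY_NO_PROXY_FOR_CODENVY_WORKSPACES=localhost,127.0.0.1,ride.example.com
EOF
# Append the short-form DNS name if it is missing:
sed -i.bak '/^CODENVY_NO_PROXY_FOR_CODENVY=/ s/$/,ride/' codenvy.env
grep '^CODENVY_NO_PROXY' codenvy.env
```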

If you create a workspace from a custom recipe and any sudo commands are executed as part of the Dockerfile instructions, make sure you use sudo -E so the proxy environment variables are preserved, for example sudo -E apt-get install python -y.

DNS Resolution

The default behavior is for RIDE Server and its workspaces to inherit DNS resolver servers from the host. You can override these resolvers by setting CODENVY_DNS_RESOLVERS in the codenvy.env file and restarting RIDE Server. DNS resolvers allow programs and services that are deployed within a user workspace to perform DNS lookups with public or internal resolver servers. In some environments, custom resolution of DNS entries (usually to an internal DNS provider) is required to enable the RIDE server and the workspace runtimes to have lookup ability for internal services.

# Update your codenvy.env with comma separated list of resolvers:
CODENVY_DNS_RESOLVERS=10.10.10.10,8.8.8.8

Firewall Tests

Firewalls typically cause traffic problems to appear when you are starting a new workspace or adding a new physical node for scaling. In certain network configurations we direct network traffic between workspaces and RIDE Server through external IP addresses, which can flow through routers or firewalls. If ports or protocols are blocked, certain functions will be unavailable.

Running RIDE Server Behind a Firewall (Linux/Mac)

# Check to see if firewall is running:
systemctl status firewalld

# Check for list of open ports
# Verify that ports 80/tcp, 443/tcp, 2376/tcp, 4789/udp, 7946/tcp+udp, 23750/tcp, and 32768-65535/tcp are open
firewall-cmd --list-ports

# Optionally open ports on your local firewall:
firewall-cmd --permanent --add-port=80/tcp
... and so on

# You can also verify that ports are open:
nmap -Pn -p <port> localhost

# On macOS, if the port is closed, open it by editing /etc/pf.conf.
# For example, to open port 1234 for TCP on all interfaces:
pass in proto tcp from any to any port 1234

# And then restart your firewall

If you are going to scale RIDE Server with additional workspace nodes, then each workspace node also needs ports 2375/tcp, 2376/tcp, 4789/udp, 7946/tcp+udp, and 32768-65535/tcp open.
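The firewalld commands for the workspace-node ports can be generated in one pass. This sketch writes them to a script for review rather than applying them directly:

```shell
# Generate the firewalld commands for the workspace-node ports above.
# Review open-ports.sh, then run it as root on each workspace node.
for p in 2375/tcp 2376/tcp 4789/udp 7946/tcp 7946/udp 32768-65535/tcp; do
  echo "firewall-cmd --permanent --add-port=$p"
done > open-ports.sh
echo "firewall-cmd --reload" >> open-ports.sh
cat open-ports.sh
```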

If you are going to use the embedded Zabbix monitor that is deployed with RIDE Server, then you must also have port 10050/tcp open on the master node and the workspace nodes.

Running RIDE Server Behind a Firewall (Windows)

There are many third party firewall services. Different versions of Windows OS also have different firewall configurations. The built-in Windows firewall can be configured in the control panel under "System and Security":

1. In the left pane, right-click Inbound Rules, and then click New Rule in the action pane.
2. In the Rule Type dialog box, select Port, and then click Next.
3. In the Protocol and Ports dialog box, select TCP.
4. Select Specific local ports, enter the port number to be opened, and click Next.
5. In the Action dialog box, select Allow the Connection, and then click Next.
6. In the Name dialog box, type a name and description for this rule, and then click Finish.

Offline Installation

We support offline (disconnected from the Internet) installation and operation. This is helpful for restricted environments, regulated datacenters, or offshore installations. The offline installation downloads the CLI, core system images, and any stack images while you are within a network DMZ with DockerHub access. You can then move those files to a secure environment and start RIDE Server.

1. Save RIDE Server Images

While connected to the Internet, download RIDE Server's Docker images:

docker run rbrain/rideserver-cli offline

The CLI will download images and save them to /backup/*.tar, with each image saved as its own file. You can save these files to a different location by volume mounting a local folder to :/data/backup. The version tag of the CLI Docker image determines which versions of dependent images are downloaded. About 1GB of data will be saved.

By default, none of the optional stack images are downloaded; these are needed to launch workspaces of a particular type.

2. Start RIDE Server in Offline Mode

Place the TAR files into a folder on the offline computer. If the files are placed in a folder named /tmp/offline, you can run RIDE Server in offline mode with:

# Load the CLI
docker load < /tmp/offline/codenvy_cli:<version>.tar

# Start RIDE Server in offline mode
docker run <other-properties> -v /tmp/offline:/data/backup rbrain/rideserver-cli:<version> start --offline
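If you saved multiple image tarballs, they can all be loaded in one loop. In this sketch the docker load call is commented out so the loop can be dry-run safely:

```shell
# Load every saved image tarball before starting offline.
OFFLINE_DIR=${OFFLINE_DIR:-/tmp/offline}
for tarball in "$OFFLINE_DIR"/*.tar; do
  [ -e "$tarball" ] || continue   # skip if no .tar files match
  echo "loading $tarball"
  # docker load < "$tarball"
done
```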

The --offline parameter instructs the RIDE Server CLI to load all of the TAR files located in the folder mounted to /data/backup. These images are then used instead of routing out to DockerHub over the Internet. The preboot sequence takes place before any CLI functions make use of Docker. The RIDE Server start, download, and init commands support --offline mode, which triggers this preboot sequence.

Uninstall

# Remove your RIDE Server configuration and destroy user projects and database
docker run rbrain/rideserver-cli:<version> destroy [--quiet|--cli]

# Delete RIDE Server's images from your Docker registry
docker run rbrain/rideserver-cli:<version> rmi

# Delete the RIDE Server CLI
docker rmi -f rbrain/rideserver-cli

System Requirements

RIDE Server installs on Linux, Mac and Windows.

Hardware

RIDE Server requires a minimum of:

  • 2 cores
  • 4GB RAM
  • 10GB disk space

RIDE Server services require 2 GB storage and 4 GB RAM. The RAM, CPU, and storage resources required for your users' workspaces are additive. RIDE Server's Docker base images consume ~7GB of disk, and the Docker images for your workspace templates can each range from 5MB up to 1.5GB. RIDE Server and its dependent core containers consume about 500MB of RAM, and each running workspace requires at least 1.5GB of RAM, depending on user requirements and the complexity of the workspace code and intellisense.
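Those figures give a rough RAM sizing rule. A back-of-the-envelope sketch, using the ~500MB core figure and the 1.5GB per-workspace minimum above:

```shell
# Rough RAM sizing: ~500 MB for core services plus at least ~1.5 GB
# (1536 MB) per concurrently running workspace.
WORKSPACES=10
CORE_MB=500
PER_WS_MB=1536
TOTAL_MB=$((CORE_MB + WORKSPACES * PER_WS_MB))
echo "plan for at least ${TOTAL_MB} MB RAM for ${WORKSPACES} workspaces"
# prints: plan for at least 15860 MB RAM for 10 workspaces
```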

Boot2Docker, docker-machine, Docker for Windows, and Docker for Mac are all Docker variations that launch a VM with Docker running inside it, accessible from your host. We recommend increasing your default VM size to at least 4GB. Each of these technologies has a different way of allowing host folders to be mounted into the VM. Please enable this for your OS so that RIDE Server data is persisted on your host disk.

Software

  • Docker 1.11+ (1.12.5+ recommended)

The RIDE Server CLI - a Docker image - manages the other Docker images and supporting utilities that RIDE Server uses during its configuration or operations phases. The CLI also provides utilities for downloading an offline bundle to run RIDE Server while disconnected from the network.

Given the nature of the development and release cycle, it is important that you have the latest version of Docker installed, because any issue you encounter might already have been fixed in a newer Docker release.

Install the most recent version of the Docker Engine for your platform using the official Docker releases, including support for Mac and Windows. If you are on Linux, you can also install with wget -qO- https://get.docker.com/ | sh.

Fedora and RHEL/CentOS users sometimes encounter issues with SELinux. Try disabling it with setenforce 0 and check whether that resolves the issue.

IP Addresses

The hostname or IP address that you give to the RIDE Server master node (and any optional workspace nodes) must be externally reachable by each browser. In scalability mode, you can create a cluster of workspace nodes by connecting different Docker daemons together. Even though the cluster is an internal object, each workspace node must be listening on a publicly reachable IP address or hostname.

Required Ports

RIDE Server's runtime launches a group of Docker containers in a compose relationship. The master node is where RIDE Server is installed and running. In scalability mode, you can add additional physical "machine" nodes, which run developer workspaces to increase system capacity.

Master Node

If you have not added any additional physical workspace nodes, then the RIDE Server master node runs core services and workspaces.

Master Node: External Ports

All ports are TCP unless otherwise noted.

Master Node: Internal Ports

All ports are TCP unless otherwise noted.


Machine Nodes

You can add as many workspace nodes as required to handle additional demand.

Machine Node: External Ports

All ports are TCP unless otherwise noted.

The Docker daemon needs to be remotely accessible to RIDE Server, so it must be set up to listen on a TCP socket. This port only needs to be accessible from the RIDE Server master node.
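As a sketch, with Docker on Linux this is typically done via the hosts entry in the daemon configuration. The file below is written locally for illustration; the real file lives at /etc/docker/daemon.json, and an exposed TCP socket should be protected with TLS:

```shell
# Illustrative daemon.json exposing Docker on tcp://0.0.0.0:2375 while
# keeping the local unix socket. Copy to /etc/docker/daemon.json and
# restart the Docker daemon on the workspace node.
cat > daemon.json <<'EOF'
{
  "hosts": ["unix:///var/run/docker.sock", "tcp://0.0.0.0:2375"]
}
EOF
cat daemon.json
```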

Machine Node: Internal Ports

All ports are TCP unless otherwise noted.

Get Help

If you are having a problem starting RIDE Server or workspaces, run the diagnostic utility docker run rbrain/rideserver-cli info on the command line to diagnose boot-time issues. When reporting a problem, please provide:

  • The output of the docker run rbrain/rideserver-cli info command
  • If requested, a support package generated with docker run rbrain/rideserver-cli info --bundle

If you have problems installing RIDE Server, please contact support@r-brain.io.
