Installing Rocket.Chat on Raspberry Pi 2

The goal is to get Rocket.Chat running on a Raspberry Pi 2. And what a crazy path it took me on.

Rocket.Chat is an open source chat server that is easy to use and has lots of features that support communication and sharing of links and files. It is going to be set up as a private chat platform for Volume Integration to increase collaboration.

Rocket Chat at Volume

I decided to start at the beginning with NOOBS and the Raspbian OS. My research indicated that Rocket.Chat had been installed and run on a Raspberry Pi before. Rocket.Chat requires Node.js, npm, Meteor, and MongoDB. I started by following some directions for installing Meteor, then ran into major issues getting Node and MongoDB to install. At this point I realized that I had a Pi 2 and that many popular packages did not have binaries for it. The newest MongoDB only runs on 64-bit architectures, and the Pi's ARM processor is 32-bit.

Raspberry Pi Logo

After much searching, compiling different versions of MongoDB and Node, and following the install-without-Docker instructions, I found that the Rocket.Chat RockOnPi community had released Rocket.Chat Raspberry Pi directions. They should have a build soon for the Raspberry Pi Zero. These directions worked with Raspbian, but they call for using mongolab.com for the Mongo database, and I could not find a build of MongoDB that worked on the Raspberry Pi 2.

But the goal was to get it all working on a single Raspberry Pi. There is a version of Linux called Arch Linux that packages MongoDB 3.2, so the first step was to install it. This was a side adventure documented in Installing Arch Linux on Raspberry Pi 2. The major issue is that as of NOOBS 1.5 there was no support for installing Arch Linux on the Raspberry Pi 2 through the NOOBS installer, which required me to write the Arch Linux image to the SD card and boot from there.

Mongo DB

First I installed MongoDB because it was the hardest part on Raspbian: there was not a build that would both work on the Raspberry Pi 2 and support Rocket.Chat. I found instructions for an old version that did run, but it was too old for Rocket.Chat.

One item of note with MongoDB on the Raspberry Pi is that its ARM processor is 32-bit, which limits MongoDB to databases of about 2 GB. Sing Li, a contributor to the Rocket.Chat project, told me on their demo chat server:

“that’s by no means limitation for Rocket.Chat … a 2 GB mongodb database IS VERY LARGE! For reference … this demo server with 38,000 registered users and close to 300,000 messages has a database that is less than 2 GB in size (for message storage). Hopefully the Pi server is expected to handle a little less.”

First, install the dependencies as root. Arch Linux has a default user named alarm; if you are connecting via SSH, log in as alarm and from there you can su to root.

pacman -S npm
pacman -S curl
pacman -S graphicsmagick
pacman -S mongodb

This caused issues with incompatible versions. So I ran:

pacman -Syu mongodb

Now we need to make a data directory for Mongo.
mkdir /data/
mkdir /data/db

If your ‘/data/db’ directory doesn’t have the correct permissions and ownership, fix it as follows:

First check what user and group your mongo user has:
# grep mongo /etc/passwd
mongod:x:498:496:mongod:/var/lib/mongo:/bin/false

You should have an entry for mongod in /etc/passwd, as it’s a daemon.
chmod 0755 /data/db
chown -R 498:496 /data/db # using the user-id , group-id
ls -ld /data/db/
drwxr-xr-x 4 mongod mongod 4096 Oct 26 10:31 /data/db/

The left side ‘drwxr-xr-x’ shows the permissions for the User, Group, and Others. ‘mongod mongod’ shows who owns the directory, and which group that directory belongs to. Both are called ‘mongod’ in this case.

Now try to start MongoDB to see if it works. On 32-bit architectures you must start it with the mmapv1 storage engine.
mongod --storageEngine=mmapv1

In theory you should enable mongodb so it will start up on boot.

Modify the /usr/lib/systemd/system/mongodb.service file with the storage engine settings.
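The exact contents of the packaged unit file will vary, but the relevant change is adding the storage engine flag to the end of the ExecStart line. A rough sketch of the edit (the binary and config paths here are assumptions; keep whatever your unit file already uses):

# /usr/lib/systemd/system/mongodb.service (excerpt; your ExecStart line may differ)
[Service]
ExecStart=/usr/bin/mongod --config /etc/mongodb.conf --storageEngine=mmapv1

Then enable the service: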
systemctl enable mongodb.service

But having it run as a service caused issues when starting Rocket.Chat: Rocket.Chat reports that database driver version 2.7 is incompatible. So for now I run it as a regular user with the mongod command. To have it continue running after logout, install screen.

pacman -S screen
screen
mongod --storageEngine=mmapv1

Some of the following directions are based on https://github.com/RocketChat/Rocket.Chat.RaspberryPi

Meteor and NPM install

The easiest way to get both is to clone from the Meteor universal project.

As a user that is not root follow this:
cd ~
git clone --depth 1 https://github.com/4commerce-technologies-AG/meteor.git

Then check the Meteor version to confirm the install worked:

$HOME/meteor/meteor -v

Rocket Chat Install

I received some great help and encouragement from the Raspberry Pi community on the Rocket.Chat chat site.

You do not need to be root to perform this step.

Download the Rocket.Chat binary for Raspberry Pi

cd $HOME
mkdir rocketchat
cd rocketchat
curl https://cdn-download.rocket.chat/build/rocket.chat-pi-develop.tgz -o rocket.chat.tgz
tar zxvf rocket.chat.tgz

This will download and untar the app in $HOME/rocketchat

After some trial and error I discovered that a few dependencies were needed: make, gcc, and python2. Root must run pacman.
pacman -S python2
pacman -S make
pacman -S gcc

Now try the install procedure, but use Python 2.7; the --save flag will show any errors that happen. I used --save to figure out that I did not have gcc (g++) installed.

cd ~/rocketchat/bundle/programs/server

~/meteor/dev_bundle/bin/npm install --python=python2.7 --save

Testing to make sure it works

export PORT=3000

export ROOT_URL=http://<your url or ip address>:3000

export MONGO_URL=mongodb://localhost:27017/rocketchat

$HOME/meteor/dev_bundle/bin/node main.js

Linger

To keep session running after logout in Arch Linux use:
loginctl enable-linger alarm

Run on Startup

Unfortunately I have been unable to get Rocket Chat to recognize MongoDB when Mongo is running as a service on Arch Linux. It says that the version of the database driver is not compatible. For now I start Mongo up as a user and place it in the background.
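As a stopgap, here is roughly what I run after logging in as alarm. This is only a sketch: the screen session names are my own, and it assumes main.js ended up in $HOME/rocketchat/bundle when you untarred the download above; adjust the path if yours differs.

screen -dmS mongo mongod --storageEngine=mmapv1
export PORT=3000
export ROOT_URL=http://<your url or ip address>:3000
export MONGO_URL=mongodb://localhost:27017/rocketchat
cd ~/rocketchat/bundle
screen -dmS rocketchat $HOME/meteor/dev_bundle/bin/node main.js

With linger enabled as shown earlier, both detached screen sessions keep running after logout.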

Next Steps

The next steps are to configure the Rocket.Chat server to start up on boot and run over SSL. We want to protect those chats flowing between our employees. Follow this blog and @volumeint on Twitter to find out about the next posts on how to install Arch Linux on Raspberry Pi 2 and how to get a free SSL certificate for your chat server.

What's the Best Tool to Monitor Redis?

High volume services like Twitter, Pinterest, and Flickr use Redis to deliver small pieces of information very quickly. Redis is ideal for these applications because it stores data in memory and on disk at the same time. Retrieving data from the rows and columns of a database can be slow, so Redis stores data in key-value pairs.

Volume Integration uses Redis in our software product called Volume Analytics. Out of the box, Redis is manipulated via the command line, but we wanted a web interface and monitoring tool to track memory usage and the up/down status of Redis. So we set out to find the best tool for the job.
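For context, out of the box that command line is redis-cli; basic reads and writes look like this (the key and value are just illustrations):

redis-cli SET user:1000:last_login "2015-10-26"
redis-cli GET user:1000:last_login

That is fine for scripting, but it gives no ongoing view of memory usage or uptime, which is what we were after.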

The tools we evaluated were Reddish, Redis Commander, and Redmon. I was able to install all of them in one day, so the installation is fairly easy.

Reddish

Reddish is programmed in Node.js and has a very basic interface. It allows for web searching of keys by name or wildcards. The interface also allows editing of values in the datastore.

Reddish Console

Reddish was not the right tool for us, as it did not have any way to monitor the service itself.

Redis Commander

Redis Commander is also built with Node.js and includes a tree-based navigation of the data with counts of how much data is in each folder. It also enables users to change the configuration settings of the Redis server through the tree.

Redis Commander Configuration

The interface allows for modification of the configuration and the data elements. At the bottom of the window, Redis Commander provides access to the command-line interface. Redis Commander includes many screens and different options to manage the data in Redis.

Edit Key Values

We found that if there is a lot of data in Redis, it can take a while for Redis Commander to load the data into the tree. This tool also did not meet our need to monitor the service for usage and uptime.

Redmon

Redmon is built with Sinatra (a Ruby web framework) and was the easiest to install. Just run gem install redmon, then start the Redmon server with a single command.
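The whole setup amounts to two commands. Running redmon with no arguments uses its defaults, which assume a Redis instance on localhost; check the Redmon README for the flags to point it at another server or port.

gem install redmon
redmon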

Redmon Monitor Screen

Redmon contains only three screens: monitoring dashboard, CLI, and configuration control. The first screen was exactly what we needed – a graph showing the performance and usage of the system.

Configuring Redis Redmon

The configuration tab allows us to change the settings of the Redis server to improve performance.

The video below shows the entire Redmon interface.

Redmon Demo

Evaluation

After our evaluation, we selected Redmon. It was the only product that was a monitoring tool. Plus, it fit well into our system since we already use Ruby for other parts of the application. Redis Commander and Reddish would be more suitable for projects that need a visual interface to manage data within Redis.

Let us know how you are using Redis and what interface you use. What is your favorite tool for managing Redis?

 

To learn more about Volume Labs and Volume Integration, please follow us on Twitter @volumeint and check out our website.

Assessing Organizational Risk with CloudTrail

Airplanes leave trails in the clouds to let us know where they’ve been. (flickr.com/Vicki Burton)

Recently, we’ve been experimenting with collecting CloudTrail data from Amazon Web Services (AWS). Here is a description of CloudTrail, according to the FAQ:

AWS CloudTrail is a web service that records API calls made on your account and delivers log files to your Amazon S3 bucket. …CloudTrail provides visibility into user activity by recording API calls made on your account. CloudTrail records important information about each API call, including the name of the API, the identity of the caller, the time of the API call, the request parameters, and the response elements returned by the AWS service. This information helps you to track changes made to your AWS resources and to troubleshoot operational issues. CloudTrail makes it easier to ensure compliance with internal policies and regulatory standards.

We use CloudTrail to further enhance our clients’ knowledge of organizational risk with AWS. We collect AWS information from CloudTrail, load it into our Volume Analytics product, and use the information to reinforce our risk models. While this short tutorial focuses on CloudTrail, the code can be used to read any data from S3.
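For illustration, a single CloudTrail record looks roughly like the following. The values are made up, and only the fields that the sample code later in this post reads are shown:

{
  "Records": [
    {
      "eventTime": "2015-10-26T10:31:00Z",
      "eventName": "DescribeInstances",
      "sourceIPAddress": "203.0.113.10",
      "userIdentity": {
        "type": "IAMUser",
        "principalId": "EXAMPLEPRINCIPALID",
        "arn": "arn:aws:iam::123456789012:user/example",
        "accountId": "123456789012",
        "accessKeyId": "AKIAIOSFODNN7EXAMPLE"
      }
    }
  ]
}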

Getting Started

What you need:

  • CloudTrail enabled on your AWS account
  • S3 enabled on your AWS account
  • The AWS SDK for Java

Collect CloudTrail Data

  1. Create an S3 bucket.
  2. Configure CloudTrail to deliver its log files to that bucket.
  3. Enable API access.
  4. Finally, you can use the AWS Java API to pull CloudTrail data from S3. Make sure you configure the AwsCredentials.properties file with the correct accessKey and secretKey. Here is my sample code for pulling data from S3 in CloudTrailTest.java on GitHub:
//package com.volume.hooks.s3;
import com.amazonaws.auth.ClasspathPropertiesFileCredentialsProvider;
import com.amazonaws.regions.Region;
import com.amazonaws.regions.Regions;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3Client;
import com.amazonaws.services.s3.model.GetObjectRequest;
import com.amazonaws.services.s3.model.ListObjectsRequest;
import com.amazonaws.services.s3.model.ObjectListing;
import com.amazonaws.services.s3.model.S3Object;
import com.amazonaws.services.s3.model.S3ObjectSummary;
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.io.Reader;
import java.util.Iterator;
import java.util.zip.GZIPInputStream;

public class CloudTrailTest {

    public static AmazonS3 s3;
    public static Region usEast1;

    public static void main(String[] args) throws IOException, InterruptedException {
        // Read credentials from AwsCredentials.properties on the classpath
        s3 = new AmazonS3Client(new ClasspathPropertiesFileCredentialsProvider());
        // Set your AWS region
        usEast1 = Region.getRegion(Regions.US_EAST_1);
        s3.setRegion(usEast1);

        // Poll the CloudTrail bucket forever
        while (true) {
            // List the objects in the S3 bucket containing CloudTrail JSON
            ObjectListing objectListing = s3.listObjects(new ListObjectsRequest()
                    .withBucketName("vatraildata"));
            // Iterate through all objects in the CloudTrail bucket
            for (S3ObjectSummary objectSummary : objectListing.getObjectSummaries()) {
                System.out.println("Downloading an object: " + objectSummary.getKey());
                S3Object object = s3.getObject(new GetObjectRequest("vatraildata", objectSummary.getKey()));
                System.out.println("Content-Type: " + object.getObjectMetadata().getContentType());
                // If the object contains content, treat it as a file
                if (objectSummary.getSize() > 0) {
                    displayTextInputStream(object.getObjectContent());
                    // Optional: delete the file after it has been read.
                    //s3.deleteObject("vatraildata", object.getKey());
                }
            }
        }
    }

    private static void displayTextInputStream(InputStream input) throws IOException {
        // All of the files are GZipped JSON
        GZIPInputStream gzipStream = new GZIPInputStream(input);
        Reader decoder = new InputStreamReader(gzipStream, "US-ASCII");
        BufferedReader reader = new BufferedReader(decoder);

        // Read the whole JSON document into memory
        StringBuilder json = new StringBuilder();
        String line;
        while ((line = reader.readLine()) != null) {
            json.append(line);
        }

        // Use your favorite JSON parser and go to town!
        ObjectMapper m = new ObjectMapper();
        JsonNode rootNode = m.readTree(json.toString());
        JsonNode records = rootNode.path("Records");
        Iterator<JsonNode> recordItr = records.iterator();
        while (recordItr.hasNext()) {
            JsonNode node = recordItr.next();
            JsonNode userIdentity = node.path("userIdentity");
            System.out.println("accountId: " + userIdentity.path("accountId").asText());
            System.out.println("type: " + userIdentity.path("type").asText());
            System.out.println("principalId: " + userIdentity.path("principalId").asText());
            System.out.println("arn: " + userIdentity.path("arn").asText());
            System.out.println("accessKeyId: " + userIdentity.path("accessKeyId").asText());
            System.out.println("event: " + node.path("eventName").asText());
            System.out.println("ip: " + node.path("sourceIPAddress").asText());
            System.out.println("eventTime: " + node.path("eventTime").asText());
            System.out.println("----");
        }

        // Closing the reader also closes the decoder and the gzip stream
        reader.close();
    }
}

Conclusion

CloudTrail is extremely helpful for gaining detailed insight into your AWS environment. As we’ve shown in this tutorial, it’s very easy to configure and pull the information into your own applications. Let us know about your experience with CloudTrail and what you discovered about your organizational risk in the comments.

To learn more about Volume Labs and Volume Integration, please follow us on Twitter @volumeint and check out our website.