A Volume Analytics Flow for Finding Social Media Bots

Volume Analytics Chaos Control


Volume Analytics is a software tool used to build, deploy and manage data processing applications.

Volume Analytics is a scalable data management platform that allows the rapid ingest, transformation, and loading of high volumes of data into multiple analytic models, as defined by your requirements or your existing data models.

Volume Analytics is a platform for streaming large volumes of varied data at high velocity.

Volume Analytics is a tool that enables both rapid software development and operational maintainability, with scalability for high data volumes. Volume Analytics can be used for all of your data mining, fusion, extraction, transformation, and loading needs. It has been used to mine and analyze social media feeds, monitor and alert on insider threats, and automate the search for cyber threats. It is also being used to consolidate data from many sources (databases, HDFS, file systems, data lakes) and to produce multiple data models for multiple data analytics visualization tools. It could equally consolidate sensor data from IoT devices or monitor a SCADA industrial control network.

Volume Analytics makes it easy to quickly develop highly redundant software that is both scalable and maintainable. In the end you save money on labor for the development and maintenance of systems built with Volume Analytics.

In other words, Volume Analytics provides the plumbing of a data processing system. The application you are building has distinct units of work that need to be done. We might compare it to a water treatment plant: dirty water comes into the system through a pipe and arrives at a large contaminant filter. The filter is a work task and the pipe is a topic. Together they make a flow.

After the first filter, another pipe carries the water, minus the dirt, to another purification worker. The plant also has a dashboard where managers monitor the system to see whether they need to fix something or add more pipes and cleaning tasks.

Volume Analytics provides the pipes, a platform to run the worker tasks and a management tool to control the flow of data through the system.
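To make the analogy concrete, here is a minimal Python sketch of the idea (an illustration only, not the Volume Analytics API; queues stand in for topics and a function for a task):

```python
from queue import Queue

# Topics are the pipes: buffers that carry data between tasks.
dirty_water = Queue()
clean_water = Queue()

# A task is a unit of work that reads from one topic and writes to another.
def contaminate_filter(in_topic, out_topic):
    while not in_topic.empty():
        sample = in_topic.get()
        out_topic.put(sample.replace("dirt", "").strip())

# Wiring the task between two topics gives a (one-step) flow.
dirty_water.put("water dirt")
contaminate_filter(dirty_water, clean_water)
print(clean_water.get())  # -> water
```

A real flow chains many such tasks, with the platform running each one on multiple servers.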

A Volume Analytics Flow for Finding Social Media Bots


In addition, Volume Analytics has redundancy for disaster recovery, high availability, and parallel processing. This is where our analogy fails: data is duplicated across multiple topics, so the failure of a particular topic (pipe) does not destroy any data, because it is preserved on another topic. Topics are optimally set up in multiple data centers to maintain high availability.

In Volume Analytics, the water filters of the analogy are called tasks. Tasks are groups of code that perform some unit of work. Your specific application will have its own tasks, and they are deployed on more than one server in more than one data center.


Faster start up time saves money and time.

Volume Analytics allows a faster start-up time for a new application or system. The team does not need to build the platform that moves data to tasks, and they do not need to build a monitoring system, since those features are included. Volume Analytics will also integrate with your current monitoring systems.

System is down less often

The DevOps team gets visibility into the system out of the box. They do not have to stand up a log search system, which saves time. They can see what is going on and fix it quickly.

Plan for Growth

As your data grows and the system needs to process more, Volume Analytics grows with it. Add server instances to increase processing power; as work grows, Volume Analytics allocates work to the new instances. No re-coding is needed, so you save time and money because developers are not needed to re-implement the code at a larger scale.

Less Disruptive deployments

Construct your application in a way that allows new features to be deployed with lower impact on features already in production. New code libraries and modules can be deployed to the platform and allowed to interact with the already running parts of the system without an outage. A built-in code library repository is included.

In addition, currently running flows can be terminated while the data waits on the topics for the newly programmed flow to start.

This Flow processes files to find IP addresses, searches multiple APIs for matches and inserts data into a HANA database


A data processing threat-search flow in production. Each box is a task that performs a unit of work. The task puts the processed data on the topic, represented by the star. Then the next task picks up the data and does another part of the job. The combination of a set of tasks and topics is a flow.

Geolocate IP Flow


An additional flow to geolocate IP addresses, added while the first flow is running.

Combined Flows


The combination of flows working together. The topic ip4-topic is an integration point.


Volume Analytics is modular and tasks are reusable. You can reconfigure your data processing pipeline without introducing new code. You can use tasks in more than one application.

Highly Available

Out of the box, Volume Analytics is highly available due to its built-in redundancy. Work tasks and topics (pipes) run in triplicate. As long as your compute instances are in multiple data centers, you have redundancy built in. Volume Analytics knows how to balance data between duplicates and avoid data loss if one or more work tasks fail; this extends to queuing up work if all work tasks fail.


Volume Analytics integrates with other products. It can retrieve data from and save data to other systems such as topics, queues, databases, file systems, and data stores, and these integrations happen over encrypted channels.

In our sample application, CyberFlow, many tasks integrate with other systems. The read-bucket task reads files from an AWS S3 bucket, the ThreatCrowd task calls the API at https://www.threatcrowd.org, and the Honeypot task calls https://www.projecthoneypot.org. The insert tasks then integrate with the SAP HANA database used in this example.

Volume Analytics integrates with your enterprise authentication and authorizations systems like LDAP, ActiveDirectory, CAP and more.

Data Management

Volume Analytics ingests datasets from throughout the enterprise, tracking each delivery and routing it through the platform to extract the greatest benefit. It shares common capabilities such as text extraction, sentiment analysis, categorization, and indexing. A series of services makes those datasets discoverable and available to authorized users and other downstream systems.

Data Analytics

In addition to the management console, Volume Analytics comes with a notebook application. This allows a data scientist or analyst to discover data and turn it into information on reports. After your data has been processed by Volume Analytics and loaded into a database, the notebook can be used to visualize it: the data is sliced and diced and displayed on graphs, charts, and maps.

Volume Analytics Notebook

Flow Control Panel


The flow control panel allows for control and basic monitoring of flows. Flows are groupings of tasks and topics working together. You can stop, start, and terminate flows, and launch additional flow virtual machines from this screen when there is a heavy load of data processing work. The panel also gives access to start up extra worker tasks as needed, and a link lets you analyze the logs in Kibana.

Topic Control Panel


The topic control panel allows for the control and monitoring of topics. Monitor and delete topics from here.

Consumer Monitor Panel


The consumer monitor panel allows for the monitoring of consumer tasks. Consumer tasks are the tasks that read from a topic. They may also write to a topic. This screen will allow you to monitor that the messages are being processed and determine if there is a lag in the processing.
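Consumer lag is the gap between the newest message on a topic and the last message the consumer has processed. A minimal sketch of the calculation such a panel performs (topic names and offset numbers here are invented):

```python
# Lag per partition = newest offset on the topic minus the offset the
# consumer has committed (i.e. how far behind the consumer is).
latest_offsets = {"ip4-topic/0": 1520, "ip4-topic/1": 980}
committed_offsets = {"ip4-topic/0": 1500, "ip4-topic/1": 975}

def consumer_lag(latest, committed):
    return {p: latest[p] - committed.get(p, 0) for p in latest}

lag = consumer_lag(latest_offsets, committed_offsets)
print(lag)                # per-partition lag
print(sum(lag.values()))  # total messages waiting to be processed
```

A steadily growing total is the signal that processing is falling behind and more worker tasks are needed.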

Volume Analytics is used by our customers to process data from many data streams and data sources quickly and reliably. In addition, it has enabled the production of prototype systems that scale up into enterprise systems without rebuilding and re-coding the entire system.

And now this tour of Volume Analytics leads into a video demonstration of how it all works together.

Demonstration Video

This video further describes the features of Volume Analytics using an example application that parses IP addresses out of incident reports and searches other systems for indications of those IP addresses. The data is saved into a SAP HANA database.

Request a Demo Today

Volume Analytics is scalable, fast, maintainable and repeatable. Contact us to request a free demo and experience the power and efficiency of Volume Analytics today.


Future Fear by BardIonson.com

Countering Computational Propaganda


There is something new happening at the intersection of computer science and social media: computational propaganda. Computational propaganda is the use of computer information systems for political purposes, according to the journal Big Data. It also includes the efforts of governments to influence public opinion in another country in order to change that country's foreign relations and policy, or to stir dissent among citizens against their own government.

Countering Propaganda

The first and most important step is to realize that propaganda is real, and then to admit that propaganda does impact the thought processes of citizens. So I will start by describing computational propaganda and then ways to computationally identify it.

Elements of Computational Propaganda

There are various elements to computational propaganda that I will attempt to outline. Some of them include: algorithms, automation, human curation, artificial intelligence, social media bots, sock puppets, troll farms, cyber attacks and stolen information, disinformation, and data science.

This new generation of propaganda has gone through a process of computerized automation. It is not totally automated, but computers, networks, and the internet have made it possible to deliver it automatically. They also enable delivery in a personal manner, in a way that makes it seem to come from a real human. Technology is also making strides in the automatic production of propaganda.

Previously propaganda moved more slowly and required getting news stories and editorials published. One used to have to own or control media outlets and then put people in place to spread disinformation. Our modern way of spreading information online with social media has lowered the cost of spreading propaganda.

What is propaganda?

Propaganda is not just pure lies and conspiracy theories. It has come a long way from the Nazi and Soviet modes of operation that might seem obvious to us now. Current propaganda is infused with truth, although it is also partly false or taken out of context in ways hidden from individual audiences. This information can now be tailored and targeted at individuals, and it is often geared to lead people to action, as all good propaganda should.

Jacques Ellul, in Propaganda: The Formation of Men's Attitudes, says that it is difficult to determine what propaganda is because it is a "secret action". Ellul found it impossible to pin down an exact definition of propaganda that did not take up an entire book. Our society has propaganda baked in that we are not aware of. In short, propaganda for Ellul covered the following areas:

  • Psychological Action – seeks to modify opinions by pure psychological methods
  • Psychological Warfare – destroy the morale of an adversary so that the opponent begins to doubt the validity of his beliefs and actions
  • Brainwashing – only used on prisoners to turn enemies to allies
  • Public and Human Relations – These group activities are also propaganda as they seek to adapt a person to a society. As we will see foreign governments use this technique in social media today.

Ellul says: “modern propaganda is based on scientific analysis of psychology and sociology… the propagandist builds his techniques on the basis of his knowledge of man.” Then the effects of the propaganda are measured for results and refined.

These thoughts, first published in the early 1960s, have only continued to be refined. They are now at work on the internet and in social media. And now there is a faster way to measure the results, with computers and the tracking of online activity.

In reality, there are competing propaganda efforts online: democratic propaganda competes with anti-democratic strains, and with militant Islamist ones as well.

Computational Propaganda in Action

But to take a step back I want to outline what this looks like online at the moment as I understand it.


Computational propaganda burst into public awareness during the 2016 US presidential election and continues today. This effort was conducted by Russian actors over many years, and I will outline some of its features that are computational in nature. The entire propaganda system is co-dependent but attempts to hide as disconnected entities.

Recently it has been reported that Russian-based accounts on Facebook and Twitter have been spreading propaganda to divide the American public. They used Facebook to support both sides of the Black Lives Matter protests, to promote gun rights, and to stoke anti-immigration sentiment. The same behavior has been observed on Reddit and Twitter. Some accounts were bots or semi-automated bots, and other accounts were used to purchase targeted advertisements.

Cyber War

One element was stealing information through cyber attacks and social engineering. This exhibited itself in the taking of information from the Democratic National Committee and in the use of classified leaks given to WikiLeaks.

Personal information on American citizens has been stolen by foreign intelligence services. Security clearance questionnaires were taken by Chinese services, and more recently credit information was appropriated, in addition to voter rolls from an unknown number of states during the elections. This personal information is alleged to have enabled more precise targeting of propaganda at specific populations.

Propaganda Generation

Then this information was selectively transformed into propaganda by taking it out of context and targeting it at select audiences. I assume that the producers of the propaganda used software to search the massive amount of stolen information, then edited the most damaging pieces to refine them for maximum effect on specific audiences.

Targeting Propaganda

This was also the case for the efforts of the Trump campaign, which took comments damaging to an opponent and targeted voter-suppression ads at specific people. Social media ad networks and databases were used in these efforts. These systems allow anyone to target a message by thousands of different personal identifiers, locations, and income brackets. This can be done on Facebook and Twitter.

Targeting Ad on Facebook


The advertisement above is targeted at people in New York City who engage with liberal political content and have a net worth between one and two million dollars.

Personal Propaganda

In addition, there are stores of personal data that have been correlated with computational algorithms to determine specific personality traits of individuals. Using this information, messaging was crafted and targeted at the psychological vulnerabilities of individuals to change their thinking or push them to action. (source)

The Troll Factory

On top of this paid targeting there are places called troll factories, where armies of people engage in social interactions on social media and blogs. One of these, in St. Petersburg, is called The Factory. Its people engage with online content to discredit it using propaganda techniques such as “whataboutism”, in which valid facts are contested by pointing out a perceived hypocrisy on the other side of the issue. They will also generate fake news to cause panic in a particular community.

Bots and Sock Puppets

These efforts are combined with a computational technique: online bots, trolls, and sock puppets. Sock puppets are fake online personas. They appear to be real people, often posing as Americans, but they just broadcast propaganda.

Bots are pieces of software that perform online tasks. Some just pick up other people’s messages online and rebroadcast them. Other bots are more human-like and engage in conversations: they search for keywords in conversations and generate a “whataboutism” or some disparaging message to confuse readers. Still other bots are semi-automated; once they are challenged as bots, or face a question they cannot respond to, a human intervenes and manually provides responses.

Bot networks

These bots and sock puppet accounts often work as a team. They gang up on conversations and rebroadcast each other’s messages. The goal is to make their message seem mainstream through volume or apparent popularity. They can derail rational conversations and arguments online by misdirecting them and triggering emotional responses.

Suppression of Speech Bots

Recently, bots have been suppressing people who attempt to get out messages countering disinformation. In the case of Brian Krebs, who was attacking the messaging of pro-Putin bots, the bots tricked Twitter into disabling his account. They did this by causing thousands of accounts to follow him and retweet his tweets en masse. This caused Twitter to automatically assume that Krebs was paying bots to promote his account, so it was turned off.

Artificial Intelligence

Artificial intelligence and machine learning are additional computational techniques being weaponized in this information battle. AI is used not only to automate bots so they appear human but also to create messaging content. An AI system processes source documents or training data, and programmers configure it to output new messages, multiplying a human’s efforts to generate content. This output is then curated or even tested online to validate its effectiveness, and the feedback loop is used to create more effective triggers for people.

Attacking the Disenfranchised

Often these efforts to trigger action leverage marginalized groups in society. The bots and troll factories take domestic content and amplify it for their own purposes. It has been shown that Russian-based bots often rebroadcast messages that attempt to deepen divisions among US citizens. Hamilton 68 illustrates these efforts.

This is a dashboard that tracks known Russian bots and exposes what they are promoting. Often this is anything that breeds mistrust in the US government and pits groups against each other.

Some countries also invest in traditional media, like newspapers, radio, and television stations, to broadcast messages. They attempt to make this look like real news, but it is actually disinformation and propaganda. This “news” is picked up as legitimate by other outlets and rebroadcast, and people who buy into the messaging will use the source as proof of their opinions.

Conspiracy Theories

Propagandists often use currently circulating conspiracy theories to oppose competing messages or true news. They neutralize the opposition by inventing a secret conspiracy or amplifying one already in circulation.

Live events

Recently The Daily Beast reported that Russian operatives organized and promoted rallies on Facebook. This illustrates the purpose of propaganda, which is to move people to action. Once people are acting on the beliefs pushed by the propaganda, it can tip into political action.


At the moment, propaganda from Russia seems aimed at changing the foreign policy of the US government. The Russians have secured the information space of their own citizens in an authoritarian way that would not be acceptable in American society, and they have leveraged the lack of privacy controls in the American capitalist system, where information about people is sold for marketing purposes. Russian propaganda efforts seem aimed only at the destruction of democracy and western values, according to Chris Zappone of The Age.

Computational Countermeasures

There are hundreds of ideas on how to counter this online propaganda: government policies, industry self-policing, and educational programs, among others. But I want to focus on countermeasures that are computational, in effect an attempt to fight fire with fire.


One way to minimize the impact of propaganda is to have tools that alert individuals when they are targeted. Another type of computational tool lets a community monitor what its members are being targeted with, which can prevent individual weaknesses from being exploited. There are efforts underway in this space, but there is room for continual improvement.

Current concepts

Ideas from Volume

  • A dark advertisement exposure network. Volunteers install browser agents to gather ads and put them in a public searchable database with the targeting criteria. Could also use fake personas to collect advertisements.
  • Public searchable databases of bots and sock puppets identified by computational techniques, such as time of day analysis, linguistic patterns and rebroadcasting behaviours.
  • The bot collection database would also hold the relationships between accounts, messages and motives.
  • Computational-methods software package to identify bots that pretend to be human and out them as bots on social media
  • Browser plug-in that will show a user the motives of a bot and expose a network of bots that link or help each other. It enables a person to ignore and discount ideas coming from that entity.
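The time-of-day analysis mentioned in the list above can be sketched simply: a human account's posting times cluster into waking hours, while a fully automated account posts around the clock. A toy illustration (the threshold and sample data are invented for the example):

```python
from collections import Counter

def active_hours(post_hours):
    """Count how many distinct hours of the day an account posts in."""
    return len(Counter(post_hours))

def looks_automated(post_hours, hour_threshold=20):
    # Posting in nearly every hour of the day is bot-like;
    # humans sleep, so their activity clusters into fewer hours.
    return active_hours(post_hours) >= hour_threshold

human = [8, 9, 12, 13, 18, 19, 21, 22, 9, 12]  # clustered waking hours
bot = list(range(24)) * 3                       # active in all 24 hours

print(looks_automated(human))  # False
print(looks_automated(bot))    # True
```

Real classifiers combine many such signals (linguistic patterns, rebroadcast ratios, follower graphs) rather than relying on any single one.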

In a blog post, Colin Crowell, Twitter’s vice president of public policy, government and philanthropy, said that Twitter itself is a “powerful antidote to the spreading of all types of false information” and that “journalists, experts and engaged citizens Tweet side-by-side correcting and challenging public discourse in seconds.”

The issue with this is that the bots generate information programmatically using artificial intelligence, or have armies of cheap labor behind them. How can citizens and journalists keep up with researching and debunking half-truths, obvious lies, and nonsense?

Perhaps we need to build counter-bot networks. Of course, they would have to fit within the social media companies’ terms of service.

Bot Combat

(more ideas from Volume Integration)

  • Bots that disrupt bot networks by sending trigger words to them and keep them busy and away from meaningful conversations.
  • Bots that look for disinformation and hoaxes and broadcast information to debunk it
  • Artificial intelligence social bots that can automatically research messaging from propaganda bots and counter the messaging
  • Crafting fact checking messaging to target back at organizations running troll factories and bot networks
  • A force of vetted volunteers that could perform analysis tasks to find bots and propagandists and then write counter arguments to them


Of course, this rests on analysis, so we would need tools to visualize the data and to support the human effort of finding the relationships and motives of the actors. Some of this effort would use the many algorithms already designed to detect bots and online propaganda. In addition, Volume Integration has a tool that can help monitor and alert on social media account activity and messaging. See our products.

At the end of the post is a list of papers on methods and analysis techniques to find automated bots online.

Prevention (CyberSecurity)

One factor in recent propaganda has been the ability of bad actors to obtain classified or private information. Here, better cyber security is needed. Sometimes the information is gathered via social engineering: outside actors manipulate a person inside an organization into providing the information or access to the computer systems.

Ideas from Volume

  • Separate internal corporate networks from the internet
  • Increase Cybersecurity methods and policies (patching schedules, inventory control, multiple factor authentication, firewalls, packet inspection, audits)
  • Trust / Risk Verification Systems like Volume Analytics which monitor events on a computer network to alert on unauthorized or risky behaviour.


I am afraid that there is no real conclusion. We are at the beginning of our ability to counter computational propaganda. It is going to be an arms race as tactics and systems change; technology and technique will breed more technology and technique. I hope we are able to separate out the false information and come a bit closer to truth. In the end, this will be the next phase of human conflict and manipulation to gain power and wealth over personal freedom.

Contact us if you want to know more about our work or follow us on Twitter, LinkedIn or Facebook.

Current Studies on Automating Analysis


Ryft and Apache Zeppelin


Ryft is an FPGA (field-programmable gate array) appliance that allows for hosting and searching data quickly. In this post I will show one way to connect Apache Zeppelin for use in data analysis using Scala code. Previously I showed how to connect Apache Zeppelin to SAP HANA.

The Ryft can quickly search structured and unstructured data without needing to build an index. This ability is attributed to the FPGA, which can filter data on demand; Ryft uses its four internal FPGA modules to process the data at search time. Other search systems, like Elasticsearch, Solr, Lucene, or a database, have to build and store an index of the data. Ryft operates without an index.

Ryft Speed Comparison


I have populated my Ryft with a cache of Enron data: a dump of Enron emails obtained from Carnegie Mellon. This was as simple as uploading the files to the Ryft and running a command like this:

ryftutil -copy "enron*" -c enron_email -a ryft.volumeintegration.com:8765

In the Zeppelin interface I will be able to search for keywords or phrases in the email files and display them. The size of the Enron e-mail archive is 20 megabytes.

Ryft One Appliance


Apache Zeppelin

Apache Zeppelin is an open source web notebook that allows a person to write code in many languages to manipulate and visualize data.

Apache Zeppelin with Volume Analytics Interface


To make Apache Zeppelin work with Ryft, I installed Apache Zeppelin onto the Ryft appliance and connected the Spark Ryft Connector jar found at this git project. Or download a prebuilt jar.

Follow the directions provided at the spark-ryft-connector project to compile the jar file needed; I compiled it on my local desktop computer. Place the spark-ryft-connector jar file onto the Ryft machine. I did run into one problem that was not documented: the Ryft connector was not working properly, giving the error “java.lang.NoClassDefFoundError: org/apache/spark/Logging”.

I resolved the issue by downloading spark-core_2.11-1.5.2.logging.jar from https://raw.githubusercontent.com/swordsmanliu/SparkStreamingHbase/master/lib/spark-core_2.11-1.5.2.logging.jar and putting it in the zeppelin/interpreter/spark/dep directory.

Now you can create a note in Zeppelin. I am using the Spark interpreter which allows you to write the code in Scala.

First you have to make sure Zeppelin can use the Ryft code in the jar file. Make a dependency paragraph with this code:
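The dependency snippet itself did not survive in this copy of the post. In Zeppelin, a dependency paragraph uses the %dep interpreter and must run before the Spark interpreter starts; for the Ryft connector it would look roughly like this (the jar path is a placeholder for wherever you placed the file):

```
%dep
z.reset()
z.load("/path/to/spark-ryft-connector.jar")
```

Run this paragraph first; if the Spark interpreter has already started, restart it so the dependency is picked up.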


Ryft Query

Now make a new paragraph with the code to create form fields and run the Ryft API commands to perform a search. Figuring out these queries takes a detailed study of the documentation.

These are the commands to prepare and run the query. I show a simple search, a fuzzy Hamming search, and a fuzzy edit-distance search. The Ryft can perform very fast fuzzy searches with wide edit distances because no index is being built.
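For readers unfamiliar with the two fuzzy modes: a Hamming search allows up to N character substitutions in a same-length match, while an edit-distance search also allows insertions and deletions. A quick illustration in plain Python, independent of the Ryft API:

```python
def hamming(a, b):
    """Substitutions only; defined for equal-length strings."""
    assert len(a) == len(b)
    return sum(x != y for x, y in zip(a, b))

def edit_distance(a, b):
    """Levenshtein distance: substitutions, insertions and deletions."""
    prev = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        cur = [i]
        for j, y in enumerate(b, 1):
            cur.append(min(prev[j] + 1,              # deletion
                           cur[j - 1] + 1,           # insertion
                           prev[j - 1] + (x != y)))  # substitution
        prev = cur
    return prev[-1]

print(hamming("mohammad", "mohammed"))       # 1
print(edit_distance("mohammad", "muhamad"))  # 2
```

A Hamming search with distance 1 would match "mohammed" for "mohammad"; only an edit-distance search can also match the shorter spelling "muhamad".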

Simple Query
queryOptions = RyftQueryOptions("enron_email", "line", 0 toByte)
query = SimpleQuery(searchFor.toString)
Hamming Query
queryOptions = RyftQueryOptions("enron_email", surrounding.toString.toInt, distance.toString.toByte, fhs)
Edit Distance Query
queryOptions = RyftQueryOptions("enron_email", "line", distance.toString.toByte)
The Search
var searchRDD = sc.ryftRDD(Seq(query), queryOptions)

This produces an RDD that can be manipulated to view the contents using code like the example below.

searchRDD.asInstanceOf[RyftRDD[RyftData]].collect.foreach { ryftData =>
   println(ryftData.data.replace("\n", " "))
}

The Result in Zeppelin

Result of Searching Ryft with Zeppelin


In addition I have included code that allows the user to click on Show File to see the original e-mail with the relevant text highlighted in bold.

Results in Bold

In order for Apache Zeppelin to display the original email, I had to give it access to the part of the filesystem where the original emails were stored. Ryft uses a catalog of the emails to perform searches, as it performs better when searching a few large files rather than many small ones. The catalog feature combines many small files into one large file.

The search results return a filename and offset which Apache Zeppelin uses to retrieve the relevant file and highlight the appropriate match. 
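The filename-plus-offset retrieval step can be sketched in a few lines (a conceptual illustration only; the function name, context size, and HTML highlighting are my own, not the post's actual code):

```python
def read_match(path, offset, length, context=30):
    """Pull a matched span out of a file by byte offset, with some
    surrounding context, and wrap the match in <b> tags as the
    notebook display does."""
    with open(path, "rb") as f:
        f.seek(max(0, offset - context))
        before = f.read(min(context, offset))
        match = f.read(length)
        after = f.read(context)
    return (before.decode(errors="replace")
            + "<b>" + match.decode(errors="replace") + "</b>"
            + after.decode(errors="replace"))
```

Because the catalog packs many emails into one large file, a single (file, offset, length) triple is enough to recover any individual match.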

In the end, Ryft found all instances of the name Mohammad, with various spelling differences, in 0.148 seconds in a dataset of 30 megabytes. The same search on 48 gigabytes of data ran in 5.89 seconds; 94 gigabytes took 12.274 seconds, and 102 gigabytes took 13 seconds. These are just quick sample numbers using dumps of many files; perhaps performance could be improved by consolidating small files into catalogs.
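Those timings work out to a fairly consistent scan rate; a quick back-of-the-envelope check using the figures quoted above:

```python
# Dataset size in gigabytes -> reported search time in seconds.
runs = {48: 5.89, 94: 12.274, 102: 13.0}

for gb, seconds in runs.items():
    print(f"{gb} GB in {seconds} s -> {gb / seconds:.1f} GB/s")
```

All three larger runs land near 8 GB/s, which is what you would expect from a scan-based (index-free) system: time grows roughly linearly with data size.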

Zeppelin Editor

The code is edited in Zeppelin itself.

Code in Zeppelin


You edit the code in the web interface, but Zeppelin can hide it once you have the form fields working. Here is the part of the code that produces the form fields:

 val searchFor = z.input("Search String", "mohammad")
 val distance = z.input("Search Distance", 2)
 var queryType = z.select("Query Type", Seq(("1","Simple"),("2","Hamming"),("3","Edit Distance"))).toString
 var surrounding = z.input("Surrounding", "line")

So in the end we end up with the following code.

import com.ryft.spark.connector._
import com.ryft.spark.connector.domain.{fhs, RyftData, RyftQueryOptions}
import com.ryft.spark.connector.query.SimpleQuery
import com.ryft.spark.connector.query.value.{EditValue, HammingValue}
import com.ryft.spark.connector.rdd.RyftRDD
import scala.language.postfixOps
import spark.implicits._
import org.apache.spark.sql.types._
import org.apache.spark.sql._
import scala.io.Source

def isEmpty(x: String) = x == null || x.isEmpty

var queryOptions = RyftQueryOptions("enron_email", "line", 0 toByte)
val searchFor = z.input("Search String", "mohammad")
val distance = z.input("Search Distance", 2)
var queryType = z.select("Query Type", "2", Seq(("1","Simple"),("2","Hamming"),("3","Edit Distance"))).toString
var surrounding = z.input("Surrounding", "line")
var query = SimpleQuery(searchFor.toString)

if (isEmpty(queryType)) {
  queryType = "2"
}

if (queryType.toInt == 1) {
  // Simple exact-match search
  if (surrounding == "line") {
    queryOptions = RyftQueryOptions("enron_email", "line", 0 toByte)
  } else {
    queryOptions = RyftQueryOptions("enron_email", surrounding.toString.toInt, 0 toByte)
  }
  query = SimpleQuery(searchFor.toString)
} else if (queryType.toInt == 2) {
  // Hamming-distance (fuzzy hamming search) query
  if (surrounding == "line") {
    queryOptions = RyftQueryOptions("enron_email", "line", distance.toString.toByte, fhs)
  } else {
    queryOptions = RyftQueryOptions("enron_email", surrounding.toString.toInt, distance.toString.toByte, fhs)
  }
} else {
  // Edit-distance query
  if (surrounding == "line") {
    queryOptions = RyftQueryOptions("enron_email", "line", distance.toString.toByte)
  } else {
    queryOptions = RyftQueryOptions("enron_email", surrounding.toString.toInt, distance.toString.toByte)
  }
}

var searchRDD = sc.ryftRDD(Seq(query), queryOptions)
var count = searchRDD.count()

print(s"%html <h2>Count: $count</h2>")

if (count > 0) {
  println(s"Hamming search RDD first: ${searchRDD.first()}")
  print("%html <table>")
  println("<script>function showhide(id) { var e = document.getElementById(id); e.style.display = (e.style.display == 'block') ? 'none' : 'block'; }</script>")

  searchRDD.asInstanceOf[RyftRDD[RyftData]].collect.foreach { ryftData =>
    print("<tr><td style='width:600px'><a href=\"javascript:showhide('" + ryftData.file + "')\">Show File</a></td>")
    val x = ryftData.data.replace("\n", " ")
    print(s"<td>$x</td></tr>")
    println("<tr id='" + ryftData.file + "' style='display:none;'>")
    println("<td style='width:600px'>")

    // Read the original email and wrap the match, located by offset and length, in <b> tags.
    val source = Source.fromFile("/home/ryftuser/maildir/" + ryftData.file)
    val theFile = try source.mkString finally source.close()
    var newDoc = ""
    var charCount = 0
    for (c <- theFile) {
      if (charCount == ryftData.offset) {
        newDoc = newDoc + "<b>"
      } else if (charCount == ryftData.offset + ryftData.length) {
        newDoc = newDoc + "</b>"
      }
      newDoc = newDoc + c
      charCount = charCount + 1
    }
    print(newDoc.replace("\n", "<br>"))
    println("</td></tr>")
  }
  print("</table>")
}
So this should get you started searching data with Zeppelin and Ryft. You can use this interface to experiment with the different edit distances and search queries that Ryft supports. You can also implement additional methods to search by regex, IP addresses, dates and currency.

Please follow us on Facebook and on Twitter at volumeint.

Cat Pulling Wires

Cats Love the Internet Of Things

Cat Pulling Wires on Internet of Things


Our cat loves the internet of things. In general, Luna loves stealing little bits of things. She stole the cable installer's coax connectors and the HVAC repair person's wire connectors. Now she is into the internet of things. These things meet her specifications: they are small, easy to carry in one's mouth and, as a bonus, have wires to pull out.

Very small, cheap computers are proliferating, allowing companies and people to put sensors into almost anything. These things can collect many types of data and transmit it to be processed and analyzed. Artists are even using them to create new art forms. One artist, Lesia Trubat, put sensors into ballet slippers to create visual abstractions of dance steps.


Recently Structure sent me a free Internet of Things (IoT) kit to use with their IoT Developer Platform, which has an Arduino-based chip at its core. It came with the Adafruit Feather Huzzah, a cool little board that supports Arduino code and has WiFi built in, plus jumper wires, a breadboard, a temperature sensor, an LED, resistors and a micro USB cable.

Internet of Things Board

The guide gives step-by-step instructions on how to register the Huzzah “thing” with the Structure development platform, how to attach the components and what code to load onto the chip.

When finished I had a thing that would broadcast its temperature, accept a command to turn on the LED and send a message when someone presses the button.


A major feature I appreciate is the built-in encryption. When you register your thing in the Structure IoT Platform it is assigned a unique id. Then you create an access key and a secret token, which are used to encrypt any data communicated between the thing and the Structure IoT Platform. When the thing starts with the proper interface code, it connects to Structure with its unique information. This is how your application workflow can send commands to the device through any firewalls. Other platforms I have used do not have this level of security built in and require payment to secure the data.

The Structure IoT Platform allows one to set up events that define what the device will trigger. It also allows you to define what types of data the device will send.

Simple Work Flow


This workflow watches to see if the temperature goes above 80 degrees Fahrenheit and sends a text message if it does. The platform supports a full-featured drag-and-drop programming interface. It is a little difficult for a programmer like me to rethink coding in a drag-and-drop interface, but it does allow non-programmers to control and gather information from their things. A plus is that it supports geospatial features like geofences to trigger events based on the location of your things.

Dashboard for Internet of Things

The platform also has a dashboard feature with the ability to build charts and graphs of the data coming from your things. With the workflow you can also build data aggregators that will sum up your data over time or combine data from multiple devices and store it for display in the dashboard.

My Temperature Dashboard

The dashboard and workflow are protected by individual user logins. But I wanted to expose my data to the internet so that you could see it here on this blog post.

Exporting Data

Structure has built easy ways into the workflow to push your data out of their platform into other systems. First you need some service to send your data to, a transport. I used the HTTP node to send my data to another IoT provider called dweet.io. This is a public service that charges to have your data secured; by default it is public. You just send your data to the URL https://dweet.io/dweet/for/name_of_your_thing?parameter=value. Mine looks like https://dweet.io/dweet/for/structureIotKitBj?tempF=76.5. Dweet.io will store the last five data points for free. Be careful with this if you do not want your data to be public.

Send data to dweet.io
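Since publishing a dweet is just an HTTP GET, you can try the same pattern from any shell. This is a minimal sketch; the thing name and the tempF parameter are simply the examples from this post, not fixed values:

```shell
# Build the dweet.io publish URL for a thing; names here are examples only.
THING="structureIotKitBj"
TEMP_F="76.5"
URL="https://dweet.io/dweet/for/${THING}?tempF=${TEMP_F}"
echo "$URL"
# curl -s "$URL"   # uncomment to actually publish; free-tier dweets are public
```

Remember that anything you dweet on the free tier is world-readable.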

Then I found a service called freeboard.io that displays data in a public dashboard and integrates with dweet.io. Freeboard allows you to build a display dashboard like Structure's, but public. You add the data feed you named on dweet.io as a datasource on freeboard.io.

This will enable the following dashboard to be seen by the world.

But if you want to protect your data, the Structure Platform may be all you need. If you need custom features, there are a multitude of ways in the platform to have it send data to your own database for deeper analysis. They even have directions on how to get your data into a Google spreadsheet using IFTTT (If This Then That). If you are looking to get started with the Internet of Things, or have something more ambitious to build, order a Huzzah and give Structure a try.

I just need to be more careful where I leave my small things that connect to the internet. I left my temperature-measuring device in the window to see what kind of drafts we get, but Luna the cat decided to pull out a black ground wire. It caused the temperature report to jump to 200 degrees, though I kept getting data. My next project might be connecting a motion detector to this and seeing how it works at detecting our IoT-loving cat.

Cat Pulling Wires

Follow this blog for more posts on IoT, robotics, Raspberry Pi and other technical topics. Be sure to check out some of our historical posts like I Code Like a Girl, War and Geospatial Mapping and one about computer history and art.

Build an Arduino Robot


There is nothing like seeing a program you wrote interact with the physical world. For this lab experiment you can make and program your own robot. An Arduino robot.

I began with instructions for the micro robot from Greece but realized my soldering skills were lacking.

Greek Mini Robot

But it did give me an idea of the parts I would need to get a small-form-factor robot up and running. This led to a search of Amazon for applicable parts. I needed at least two motors, some model of Arduino, an L293D H-Bridge motor driver chip, some servos for additional capabilities and a platform to attach all the parts to.

Here is what I ended up with.

Total cost was around $40.00.

Official Arduino Nano

I picked the nearly compatible generic Arduino Nano because of its built-in USB support, small form factor and price. I say nearly compatible because its USB driver chip is a cheaper version of what comes on an official Arduino, and it requires special drivers to be installed on the computer you use to program it. I followed the article How to Use Cheap Chinese Arduinos with the CH340G USB Chip On Yosemite and Windows, which shows how to install the driver and enable it so it works with the Arduino IDE, where you write your code.


My Cheap Arduino

To find the board you would like to use, consult the comparison chart at the Arduino website.

To make sure things were working I connected the Nano to my computer, started up the Arduino IDE and loaded the blink example code or what they call a sketch. Then I uploaded it to the Arduino and watched the onboard LED blink on and off.

The next step, the exciting part, was to wire up the L293D chip to the Arduino.

RobotPlatform.com had a great explanation and pin diagrams of the L293D. But I used a video to really nail down how to attach the two parts together using two breadboards and jumper wires between the two.

In the video he maps out what pins to attach to on the Arduino. The pin numbers are written on the Arduino board.

But here is a diagram that I made that will help you line up the wiring pin by pin, starting with the mini breadboards.

Looks like this when finished

I used one for the L293D motor driver chip and another for the Nano. You can use the jumper wires to push into the holes on the breadboard that line up with the pin you want to connect. You will need to attach more than one wire to some of the pins. This configuration will allow the battery pack to power the motors and the chip. Make sure you are supplying at least 5 volts. The VIN pin is for unregulated power so if your motors need more power just supply more voltage. The Nano has a built in voltage regulator.

  • Pin 8 -> VIN on Nano
  • Pin 1 -> Pin 16
  • Pin 9 -> Pin 16
  • Pin 16 -> 5v on Nano for power
  • Pin 4 -> Pin 5 for Grounding
  • Pin 12 -> Pin 13 for Grounding
  • Pin 4 -> Pin 13 for Grounding
  • Pin 12 -> Gnd on Nano
  • Pin 10 -> D5 on Nano controls right motor
  • Pin 15 -> D4 on Nano controls right motor
  • Pin 7 -> D7 on Nano controls left motor
  • Pin 2 -> D6 on Nano controls left motor
  • Pin 3 -> – on left motor
  • Pin 6 -> + on left motor
  • Pin 14 -> – on right motor
  • Pin 11 -> + on right motor
    • + on battery pack -> VIN on Nano (VIN takes raw, unregulated power)
    • – on battery pack -> GND on Nano

One way to wire it up


At some point after I had it all working, I crossed the 5V wire with the ground and shorted out the Arduino, so be careful with that. I ended up finding a lower-cost option and purchased three new Nanos. This way of wiring allows the Arduino and the motors to be powered from the same battery pack; I used the battery container that came with the chassis kit, which supplies 6v.

Now it’s time to program the Nano and see what it can do. I used the code samples from the Arduino forum and modified it to work with the pins I selected for controlling the motors.

To program the Arduino, get the IDE from Arduino.cc and install it. Follow the directions at How to use cheap Arduinos. In the IDE, open the Tools menu and select the board you have (in my case Nano), the processor (I have the ATmega328) and the port (mine shows as /dev/cu.wchusbserial1420).


// Motor Controls
const int PWMELPin = 10; // Enable PWM Left Motor
const int PWMERPin = 11; // Enable PWM Right Motor
const int L1Pin = 6; // L1 = Left Motor Lead 1
const int L2Pin = 7; // L2 = Left Motor Lead 2
const int L3Pin = 4; // L3 = Right Motor Lead 1
const int L4Pin = 5; // L4 = Right Motor Lead 2
const int LEDPin = 13; // Telltale LED
const int Duration = 2000; // Milliseconds per movement

void setup() {
   pinMode(PWMELPin, OUTPUT);
   pinMode(PWMERPin, OUTPUT);
   pinMode(L1Pin, OUTPUT);
   pinMode(L2Pin, OUTPUT);
   pinMode(L3Pin, OUTPUT);
   pinMode(L4Pin, OUTPUT);
   pinMode(LEDPin, OUTPUT);
}

// Main Loop
void loop() {
   MotorL(1, 1);              // both motors forward
   MotorR(1, 1);
   digitalWrite(LEDPin, HIGH);
   delay(Duration);
   MotorL(0, 1);              // both motors reverse
   MotorR(0, 1);
   delay(Duration);
   MotorL(1, 1);              // left forward, right reverse: quick turn
   MotorR(0, 1);
   delay(Duration);
}

// ***********************************************************************
// Motor Driver Routines**************************************************
// ***********************************************************************
// Function(Direction, Speed)
// Direction: 1 = Forward, 0 = Reverse
// Speed: 1 = High/250, 0 = Low/20
// ***********************************************************************
void MotorL(int Direction, int Speed) {
   // Left Motor
   if (Speed == 1) {
      analogWrite(PWMELPin, 250);
   } else {
      analogWrite(PWMELPin, 20);
   }
   if (Direction == 1) {
      digitalWrite(L1Pin, HIGH);
      digitalWrite(L2Pin, LOW);
   } else {
      digitalWrite(L1Pin, LOW);
      digitalWrite(L2Pin, HIGH);
   }
}

// ***********************************************************************
void MotorR(int Direction, int Speed) {
   // Right Motor
   if (Speed == 1) {
      analogWrite(PWMERPin, 250);
   } else {
      analogWrite(PWMERPin, 20);
   }
   if (Direction == 1) {
      digitalWrite(L3Pin, HIGH);
      digitalWrite(L4Pin, LOW);
   } else {
      digitalWrite(L3Pin, LOW);
      digitalWrite(L4Pin, HIGH);
   }
}
// ***********************************************************************

Upload this to your Arduino and see what happens. It should drive forward or backward (depending on which direction you attached the motor wires) for the Duration set in the code, then stop, turn quickly to the left and go forward again. It will loop forever until you press the reset button or kill the power.

Let me know here how it works for you, and follow the blog and @volumeint on Twitter for future lab projects. Upcoming posts include the Internet of Things (IoT) and adding sensors to the Arduino robot.

A Better Web Server with Free SSL


In researching the best way to get the conversations on our Rocket Chat server encrypted, I ran across the most innovative web server I have seen. Our previous post on Rocket Chat on Raspberry Pi 2 describes how to install everything, but left the SSL configuration until now.

I found that the easiest way to get Rocket.Chat set up with SSL is to use a second web server. The Rocket Chat git repository has some directions on how to set up Apache, but that still leaves the problem of getting an SSL certificate.
Caddy made this easy. On most major operating systems you can install it from a download package or your package manager; on x86 Arch Linux, running 'sudo pacman -S caddyserver' will do the trick. Since I am deployed on Arch Linux on a Raspberry Pi 2, it was more difficult, and I had to build it myself:

curl -L -O https://aur.archlinux.org/cgit/aur.git/snapshot/caddy-git.tar.gz
tar -xvf caddy-git.tar.gz
pacman -S fakeroot
mv caddy-git /home/user/
chown -R user:user /home/user/caddy-git
cd /home/user/caddy-git
makepkg -sri

Modify Caddyfile

chat.yourServer.com
proxy / localhost:3000

Then to start it up run:

caddy -conf="/home/user/Caddyfile" -email yourEmail@server.com -agree

The magic added bonus is that if you have ports 80 and 443 open, Caddy will go get a Let's Encrypt SSL certificate and start running with it.

The Caddyfile is very powerful and easy to configure, and much more flexible and understandable than Apache conf files. The proxy directive is what takes users' page requests from port 443 on Caddy and passes them through to port 3000, where Rocket Chat is running.

I can say that Caddy is my new favorite web server after many years of using Apache and Jetty.

I am encouraged to see free SSL certificates being offered. It always seemed that the price put on encryption for web sites was out of line with the work it takes to create an SSL certificate. These certificates verify the identity of a web host and encrypt all the data being looked at on a web page. My post on entropy outlines how easy it is to generate enough random data to generate certificates. Let’s Encrypt provides a simple and easy way to get and manage SSL certificates.

Subscribe to this page or follow @volumeint on Twitter to be informed when new posts from Volume Labs appear. Check out https://volumeintegration.com for more about Volume.

Installing Rocket.Chat on Raspberry Pi 2


The goal is to get Rocket.Chat running on a Raspberry Pi 2. And what a crazy path it took me on.

Rocket.Chat is an open source chat server that is easy to use and has lots of features supporting communication and sharing of links and files. It is going to be set up as a private chat platform for Volume Integration to increase collaboration.

Rocket Chat at Volume

I decided to start at the beginning with NOOBS and the Raspbian OS. My research indicated that Rocket.Chat had been installed and run on a Raspberry Pi before. Rocket.Chat requires Node.js, npm, Meteor and MongoDB. I started by following some directions for installing Meteor, then ran into major issues getting Node and MongoDB to install. At this point I realized that I had a Pi 2 and that many popular packages did not have binaries for it. The newest MongoDB only runs on 64-bit architectures, and the Pi's ARM processor is 32-bit.

Raspberry Pi Logo

After much searching and compiling of different versions of MongoDB and Node while following the installation-without-Docker instructions, the Rocket.Chat RockOnPi community released Rocket Chat Raspberry Pi directions. They should have a build soon for the Raspberry Pi Zero. These directions worked with Raspbian, but they call for using mongolab.com for the Mongo database, and I could not find a build of MongoDB that worked on the Raspberry Pi 2.

But the goal was to get it all working on a single Raspberry Pi. Arch Linux is a version of Linux that has MongoDB 3.2, so the first step was to install it. This was a side adventure documented in Installing Arch Linux on Raspberry Pi 2. The major issue is that as of NOOBS 1.5 there was no support for installing Arch Linux on the Raspberry Pi 2 using the NOOBS installer, which required me to write the package to the SD card and boot from there.

Mongo DB

First I installed Mongo DB because it was the hardest part on Raspbian. There was not a build that would work on the Raspberry Pi 2 and support Rocket.Chat. I found instructions for an old version that did work but it was too old for Rocket.Chat.

One item of note with MongoDB on the Raspberry Pi is that its ARM processor is 32-bit. This means MongoDB will only support databases up to 2 GB. Sing Li, a contributor to the Rocket.Chat project, told me on their demo chat server:

“that’s by no means limitation for Rocket.Chat; a 2 GB mongodb database IS VERY LARGE! For reference … this demo server with 38,000 registered users and close to 300,000 messages has a database that is less than 2 GB in size (for message storage). Hopefully the Pi server is expected to handle a little less.”

First, install the dependencies as root. Arch Linux has a default user of alarm; if you are connecting via ssh, log in as alarm and from there you can su to root.

pacman -S npm
pacman -S curl
pacman -S graphicsmagick
pacman -S mongodb

This caused issues with incompatible versions. So I ran:

pacman -Syu mongodb

Now we need to make a data directory for Mongo.
mkdir /data/
mkdir /data/db

If your /data/db directory doesn't have the permissions and ownership shown below, do this:

First check what user and group your mongo user has:
# grep mongo /etc/passwd

You should have an entry for mongod in /etc/passwd, as it's a daemon.
chmod 0755 /data/db
chown -R 498:496 /data/db # use the user-id and group-id from /etc/passwd
ls -ld /data/db/
drwxr-xr-x 4 mongod mongod 4096 Oct 26 10:31 /data/db/

The left side ‘drwxr-xr-x’ shows the permissions for the User, Group, and Others. ‘mongod mongod’ shows who owns the directory, and which group that directory belongs to. Both are called ‘mongod’ in this case.

Now try to start mongodb to see if it works. On 32-bit architectures you must start it with the mmapv1 storage engine.
mongod --storageEngine=mmapv1

In theory you should enable mongodb so it will start up on boot. Modify the /usr/lib/systemd/system/mongodb.service file with the storage engine settings, then enable the service:

systemctl enable mongodb.service
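The unit file edit mentioned above might look like the following sketch. This is an assumption, not the stock file: the actual ExecStart line in Arch's mongodb.service may pass a config file instead, so check yours before editing.

```ini
# /usr/lib/systemd/system/mongodb.service (excerpt; path and line are assumptions)
[Service]
ExecStart=/usr/bin/mongod --storageEngine=mmapv1
```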

But having it run as a service caused issues when starting Rocket.Chat: Rocket.Chat says that database driver version 2.7 is incompatible. So for now I run it as a regular user with the mongod command. To have it continue running after logout, install screen.

pacman -S screen
screen
mongod --storageEngine=mmapv1

Some of the following directions are based on https://github.com/RocketChat/Rocket.Chat.RaspberryPi

Meteor and NPM install

The easiest way to get both is to clone from the Meteor universal project.

As a user that is not root, run:
cd ~
git clone --depth 1 https://github.com/4commerce-technologies-AG/meteor.git


Check the install with:

$HOME/meteor/meteor -v

Rocket Chat Install

I received some great help and encouragement from the Raspberry Pi community on the Rocket.Chat chat site.

You do not need to be root to perform this step.

Download the Rocket.Chat binary for Raspberry Pi

cd $HOME
mkdir rocketchat
cd rocketchat
curl https://cdn-download.rocket.chat/build/rocket.chat-pi-develop.tgz -o rocket.chat.tgz
tar zxvf rocket.chat.tgz

This will download and untar the app in $HOME/rocketchat

After some trial and error I discovered that some dependencies were needed: make, gcc and python2. Root must run pacman.
pacman -S python2
pacman -S make
pacman -S gcc

Now try the install procedure, but use Python 2.7. The --save flag will show any errors that happen; I used it to figure out that I did not have gcc (g++) installed.

cd ~/rocketchat/bundle/programs/server

~/meteor/dev_bundle/bin/npm install --python=python2.7 --save

Testing to make sure it works

export PORT=3000

export ROOT_URL=http://your-url-or-ip-address:3000

export MONGO_URL=mongodb://localhost:27017/rocketchat

$HOME/meteor/dev_bundle/bin/node main.js
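The exports above can be collected into one small start script. This is a sketch under the assumption that the bundle lives in $HOME/rocketchat and MongoDB is already running locally; replace localhost in ROOT_URL with your host or IP:

```shell
# Hypothetical start script for Rocket.Chat; adjust ROOT_URL for your host.
export PORT=3000
export ROOT_URL="http://localhost:${PORT}"
export MONGO_URL="mongodb://localhost:27017/rocketchat"
echo "Rocket.Chat will listen on $ROOT_URL"
# cd $HOME/rocketchat/bundle && $HOME/meteor/dev_bundle/bin/node main.js
```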


To keep the session running after logout on Arch Linux use:
loginctl enable-linger alarm

Run on Startup

Unfortunately I have been unable to get Rocket Chat to recognize MongoDB when Mongo is running as a service on Arch Linux. It says that the version of the database driver is not compatible. For now I start Mongo up as a user and place it in the background.

Next Steps

The next steps are to configure the Rocket Chat server to start up on boot and run on SSL. We want to protect those chats flowing between our employees. Follow this blog and @volumeint on Twitter for the next posts on how to install Arch Linux on Raspberry Pi 2 and how to get a free SSL certificate for your chat server.

Raspberry Pi and Installing Arch Linux


Getting Arch Linux installed on a Raspberry Pi 2 was a challenge for me. Hopefully this post will help you set it up.

Arch Linux is a very robust Linux distribution. It installs only the base packages needed to get running; everything else must be installed as it is needed, so you may find yourself running the package installer (pacman -S) often. It was also nice that the network interface was set up with DHCP by default; all I had to do after the install was plug in an ethernet cable.

In the process of a successful attempt to get Rocket.Chat and MongoDB running on the Raspberry Pi 2 from CanaKit I found that the only way I could get the correct version of MongoDB 3.2 that works with Rocket.Chat was to install Arch Linux. I outlined how I installed Rocket.Chat in another post called Installing Rocket.Chat on Raspberry Pi 2.

The major issue is that as of NOOBS 1.5 there was no support for installing Arch Linux on the Raspberry Pi 2 using the NOOBS installer. NOOBS does display the option to install Arch Linux, but it would then tell me that it could not find the right version for the Raspberry Pi 2. This required me to write the package to the SD card and boot from there.

All the directions I could find indicated that the installation of the Arch Linux boot files could be done from a computer. Unfortunately, the computer had to be running Linux, not Macintosh OS, and I have a Mac and a Windows computer.

So I started up my handy Oracle VirtualBox running Ubuntu. But then I learned, with help from Mayur Rokade, that VirtualBox and Ubuntu do not support the SD card port on the Macintosh. He suggested connecting an old cell phone as a USB device.

Old Smartphone as USB drive

I could not get this to work, but then I found I had a USB-to-SD-card adapter from an Eye-Fi card.

SD Card USB Adapter

But Ubuntu still would not see the device, so a post from mayurrokade.com was helpful. I ended up having to “eject” the USB device from Finder and then start Ubuntu in VirtualBox; Ubuntu did not see the card until the Mac released it.

Now, following the official directions at archlinuxarm.org, I continued the setup.
Replace sdX in the following instructions with the device name for the SD card as it appears on your computer. To see what your device gets named, use this command in the Ubuntu terminal:

sudo lsblk

This will display the names of the drives that are attached. On my computer it was: sdb

Start fdisk to partition the SD card:
fdisk /dev/sdX
At the fdisk prompt, delete old partitions and create a new one. This will delete all data on the SD card.
Type o. This will clear out any partitions on the drive.
Type p to list partitions. There should be no partitions left.
Type n, then p for primary, 1 for the first partition on the drive, press ENTER to accept the default first sector, then type +100M for the last sector.
Type t, then c to set the first partition to type W95 FAT32 (LBA).
Type n, then p for primary, 2 for the second partition on the drive, and then press ENTER twice to accept the default first and last sector.
Write the partition table and exit by typing w.
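If you are comfortable with the steps above, the same keystrokes can be fed to fdisk non-interactively. This is only a sketch: it destroys everything on the target device, /dev/sdX is a placeholder you must replace, and your fdisk version may prompt slightly differently.

```shell
# Keystrokes for the steps above: wipe (o), new 100M partition (n/p/1/<enter>/+100M),
# set type W95 FAT32 LBA (t/c), second partition (n/p/2/<enter>/<enter>), write (w).
FDISK_KEYS='o\nn\np\n1\n\n+100M\nt\nc\nn\np\n2\n\n\nw\n'
printf "$FDISK_KEYS" | head -n 1     # sanity check: prints the first keystroke, o
# printf "$FDISK_KEYS" | sudo fdisk /dev/sdX   # DESTRUCTIVE: replace sdX first
```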
Create and mount the FAT filesystem. These commands will create folders called boot and root in whatever directory you are in; I recommend using /home/<username>. One partition is for booting up and has to be a FAT partition. The root partition, which is bigger, needs to be an ext4 partition.

mkfs.vfat /dev/sdX1
mkdir boot
mount /dev/sdX1 boot

Create and mount the ext4 filesystem:

mkfs.ext4 /dev/sdX2
mkdir root
mount /dev/sdX2 root

Download and extract the root filesystem (as root, not via sudo):

wget http://archlinuxarm.org/os/ArchLinuxARM-rpi-2-latest.tar.gz

You will need to install the bsdtar package with pacman:

pacman -S bsdtar

This will expand the file and put Arch Linux in the root folder which is on the SD card.

bsdtar -xpf ArchLinuxARM-rpi-2-latest.tar.gz -C root
sync

Move boot files to the first partition called boot:

mv root/boot/* boot
Unmount the two partitions:
umount boot root

Insert the SD card into the Raspberry Pi, connect ethernet, and apply 5V power.
Use the serial console or SSH to the IP address given to the board by your router.
Login as the default user alarm with the password alarm.
The default root password is root.

Follow this blog and @volumeint on Twitter to find out about the next posts on how to get a free SSL certificate for your chat server and a future post on functional programming.

Volume Integration Sponsors Education with Robots


We humans have been fascinated with robots since at least the 10th century BC, when Yan Shi is recorded to have created a human-like automaton. Now we teach robotics beginning in elementary school. Two popular programs in our area are the FIRST LEGO League and the VEX Robotics Competition.

Marsteller Robotics

The competition consists of moving objects into a goal during 15 seconds of autonomous operation followed by 1 minute and 45 seconds of remote-control driving. In addition, each robot is paired with a second robot to work as a team.

This process teaches the competitors social and teamwork skills while they build something that requires them to learn and exercise science, technology, engineering and math (STEM) skills.


The robots are programmed in RobotC, a language created specifically for education and robotics. It is an altered version of C. This sample from Elizabeth, the programmer on team 14g last year, shows the autonomous sequence.

#pragma config(Motor, port7, claw, tmotorVex393_MC29, openLoop)

// Runs the claw motor (defined in the pragma above) open for "time" milliseconds.
void clawopen (int time)
{
  motor[claw] = 127;
  wait1Msec(time);
  motor[claw] = 0;
}

task autonomous()
{
  driveforward (400);
  armup (1600);
  clawopen (1000);
  armup (1600);
  clawopen (1000);
}

A series of functions that the team programmer Elizabeth wrote, such as clawopen, armup and turnright, are executed. These functions call internal robot functions that control motors and read from sensors. The snippet shows the clawopen function, which references the claw motor defined in the pragma at the top. The config ties the port number on the cortex to a specific motor type.


The VEX competition requires the use of standard parts so teams are working from the same base. The brain of the robot is an item they call the cortex. It is an ARM processor with ports for motor and sensor connections. There is a joystick to control the robot and then there are sensor and motor modules. In addition there are collections of gears, wheels, belts and structural pieces to build the body.

Optical Shaft encoder to calculate distance traveled

Vex Motor 393



Each robot is built by a team of four to six students working together to design, build and program it. They also must work together during the competitions in determining strategy and managing their time as they sometimes need to make adjustments between matches.

Between matches they compete in individual events where they are judged on programming skills and robot driving skills.

Robotics Marsteller

During the preliminaries they are teamed with another robot team, so all the matches are two against two. They learn to problem-solve under pressure and to work with the strengths and weaknesses of the random pairings over many matches.

In the semi-finals of the tournament their math skills are tested as they determine the best teams to align with. The top teams pick two other teams they want to compete with in the finals. It is a loud dash as teams negotiate and jockey for position.

In the end there are multiple winners: the most inventive robot, the tournament champions, the second-place team, the programming skills winner, the robot skills winner, design (based on the quality of the engineering notebook) and the judges' award.

The best teams during the season go to the Virginia State competition and then the top teams go to a world championship.

Volume Integration and Volume Labs are proud to sponsor this fun and educational technological sport. Be sure to check back for updates on the progress of the Marsteller team.


What Would You Make With The World's First $9 Computer?

I bought a new computer today.

A tiny $9 computer.

It is being funded on Kickstarter. To get down to the $9 price point, Next Thing Co. needed to order parts in bulk. As of this writing they have raised over $1.7 million from 33,000 backers. Unfortunately, they are not projecting the first shipment until December 2016.

In the tradition of Volume Labs I want to document the experiments we build with it. It is called CHIP and has features similar to the Raspberry Pi 2, but it costs $9 versus the Pi's $35. It has a 1 GHz processor, 512 MB of RAM and 4 GB of storage, with built-in WiFi and Bluetooth plus a display port, audio, microphone and USB port.

It will run mainline Linux, which means we can get code working on it right away.

CHIP is open-source hardware, which should allow people to build add-ons much as they have for the Raspberry Pi. It has a series of pins for attaching extra features such as HDMI, VGA or LCD touch screens. There is also support for camera sensors, which we would need for some of our experiments.

Small form factor with many interfaces

Since we have so long to wait for CHIP to arrive, we are making a list of ideas. Here are some we are thinking about pursuing. What would you make?

  • An artwork display system that will display art, animated gifs and videos on a large screen.
  • A Twitter art bot that creates artwork based on instructions from @artassignbot. It will parse the words, perform entity resolution, search the internet for images, sounds and videos, and mix them into art of the genre the Art Assign Bot commands. The art will be posted to Twitter.
  • Connect a camera that is programmed to take pictures when a face is detected and display it on a large screen.
  • Monitor activity on the network it is attached to.
  • Scan the network it is attached to and catalog the results.
  • Implement a VPN with it.
  • A continuous integration system that displays software build and test status on a TV.
  • An installation artwork titled “Still Haven’t Found It” that randomly follows links on the internet and displays them on an old television. When it finds video content it will play it for a random amount of time.

Early next year I will begin writing posts about my experience with CHIP and show off what we are making.

Check out more of our work at Volume Integration and follow us on Twitter.