Metron Docker is a Docker Compose application that is intended only for development and integration testing of Metron. These images can quickly spin up the underlying components on which Apache Metron runs.
None of the core Metron components are set up or launched automatically with these Docker images. You will need to manually set up and start the Metron components that you require, so you should not expect to see telemetry being parsed, enriched, or indexed out of the box. If you are looking to try out, experiment with, or demo Metron capabilities on a single node, then the Vagrant-driven VM is what you need. Use this instead of Vagrant when:
- You want an environment that can be built and spun up quickly
- You need to frequently rebuild and restart services
- You only need to test, troubleshoot or develop against a subset of services
Metron Docker includes these images that have been customized for Metron:
- Kafka (with Zookeeper)
- HBase
- Storm
- Elasticsearch
- Kibana
- HDFS
## Setup
Install Docker for Mac or Docker for Windows. The following versions have been tested:
- Docker version 1.12.0
- docker-machine version 0.8.0
- docker-compose version 1.8.0
Build Metron from the top level directory with:
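For example, assuming a standard Metron checkout with Maven installed:

```
$ cd metron                       # root of your Metron checkout
$ mvn clean install -DskipTests   # build all Metron modules, skipping tests
```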
You are welcome to use an existing Docker host, but we prefer one with more resources. You can create one of those with this script:
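A minimal sketch using docker-machine with the VirtualBox driver (the memory and disk sizes below are only suggestions):

```
$ docker-machine create --driver virtualbox \
    --virtualbox-memory 8192 \
    --virtualbox-disk-size 25000 \
    metron-machine
```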
This will create a host called “metron-machine”. Anytime you want to run Docker commands against this host, make sure you run this first to set the Docker environment variables:
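For example:

```
$ eval "$(docker-machine env metron-machine)"   # points the docker CLI at metron-machine
```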
If you wish to use a local docker-engine install, please set an environment variable BROKER_IP_ADDR to the IP address of your host machine. This cannot be the loopback address.
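For example (the address shown is a placeholder; use your machine's actual LAN IP):

```
$ export BROKER_IP_ADDR=192.168.1.10   # placeholder; must be a routable, non-loopback address
```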
## Usage
Navigate to the compose application root:
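For example (the path below assumes the metron-docker module lives under metron-contrib; adjust for your Metron version):

```
$ cd metron-contrib/metron-docker/compose
```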
The Metron Docker environment lifecycle is controlled by the docker-compose command. The service names can be found in the docker-compose.yml file. For example, to build and start the environment run this command:
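For example:

```
$ docker-compose up -d   # builds images as needed and starts all services in the background
```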
After all services have started, list the containers and ensure their status is ‘Up’:
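For example:

```
$ docker-compose ps   # lists the compose services and their current state
```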
Various services are exposed over HTTP on the Docker host. Get the host IP from the URL property:
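For example (the output below is illustrative):

```
$ docker-machine ls
NAME             ACTIVE   DRIVER       STATE     URL
metron-machine   *        virtualbox   Running   tcp://192.168.99.100:2376
```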
Then, assuming a host IP of 192.168.99.100, the UIs and APIs are available at:
- Storm - http://192.168.99.100:8080/
- HBase - http://192.168.99.100:16010/
- Elasticsearch - http://192.168.99.100:9200/_plugin/head/
- Kibana - http://192.168.99.100:5601/
- HDFS (Namenode) - http://192.168.99.100:50070/
The Storm logs can be useful when troubleshooting topologies. They can be found on the Storm container in /usr/share/apache-storm/logs.
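For example (the compose service name `storm` is assumed here; check docker-compose.yml for the actual name):

```
$ docker-compose exec storm ls /usr/share/apache-storm/logs   # list the available Storm log files
```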
When done using the machine, shut it down with:
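For example, stop the containers and then the Docker machine:

```
$ docker-compose down               # stop and remove the compose containers
$ docker-machine stop metron-machine
```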
## Examples
### Deploy a new parser class
After adding a new parser to metron-parsers-common, build Metron from the top level directory:
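For example, from the Metron checkout root:

```
$ cd metron
$ mvn clean install -DskipTests
```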
Then run these commands to redeploy the parsers to the Storm image:
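A sketch, assuming the Storm container's compose service is named `storm` (check docker-compose.yml):

```
$ cd metron-contrib/metron-docker/compose   # same compose directory as above
$ docker-compose down            # stop the running environment
$ docker-compose build storm     # rebuild the Storm image with the freshly built parser jars
$ docker-compose up -d           # start everything back up
```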
### Connect to a container
Suppose there is a problem with Kafka and the logs are needed for further investigation. Run this command to connect and explore the running Kafka container:
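Assuming the Kafka/Zookeeper compose service is named `kafkazk` (check docker-compose.yml):

```
$ docker-compose exec kafkazk bash   # open an interactive shell inside the running container
```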
### Create a sensor from sample data
A tool for producing test data in Kafka is included with the Kafka/Zookeeper image. It loops through lines in a test data file and outputs them to Kafka at the desired frequency. Create a test data file in ./kafkazk/data/ and rebuild the Kafka/Zookeeper image:
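A sketch, again assuming the service name `kafkazk`:

```
$ docker-compose down
$ docker-compose build kafkazk   # picks up files added under ./kafkazk/data/
$ docker-compose up -d
```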
This will deploy the test data file to the Kafka/Zookeeper container. Now that data can be streamed to a Kafka topic:
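The script name, arguments, and in-container data path below are illustrative; check the Kafka/Zookeeper image for the exact tool it ships:

```
# hypothetical invocation: data file, target topic, delay between messages in seconds
$ docker-compose exec kafkazk ./bin/produce-data.sh /data/mydata.csv mytopic 1
```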
The Kafka/Zookeeper image comes with sample Bro and Squid data:
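For example, using the same (illustrative) producer script with the bundled sample files:

```
$ docker-compose exec kafkazk ./bin/produce-data.sh /data/bro.csv bro     # hypothetical paths; verify in the image
$ docker-compose exec kafkazk ./bin/produce-data.sh /data/squid.csv squid
```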
### Upload configs to Zookeeper
Parser configs and a global config tailored to this Docker environment are included with the Kafka/Zookeeper image. Load them with:
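A sketch using Metron's zk_load_configs.sh from inside the Kafka/Zookeeper container (the value of $METRON_HOME and the ZooKeeper address depend on the image):

```
$ docker-compose exec kafkazk bash -c \
    '$METRON_HOME/bin/zk_load_configs.sh -m PUSH -i $METRON_HOME/config/zookeeper -z localhost:2181'
```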
Dump out the configs with:
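For example:

```
$ docker-compose exec kafkazk bash -c \
    '$METRON_HOME/bin/zk_load_configs.sh -m DUMP -z localhost:2181'
```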
### Manage a topology
The Storm image comes with a script to easily start parser topologies:
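The wrapper script name below is hypothetical (look in the Storm image's bin directory for the real one); it would typically wrap Metron's start_parser_topology.sh:

```
$ docker-compose exec storm ./bin/start_parser.sh bro   # hypothetical wrapper; argument is the sensor name
```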
The enrichment topology can be started with:
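Assuming a similar wrapper exists for enrichment (name hypothetical):

```
$ docker-compose exec storm ./bin/start_enrichment.sh   # hypothetical wrapper around Metron's enrichment topology script
```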
The indexing topology can be started with:
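And likewise for indexing (name hypothetical):

```
$ docker-compose exec storm ./bin/start_indexing.sh   # hypothetical wrapper around Metron's indexing topology script
```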
Topologies can be stopped using the Storm CLI. For example, stop the enrichment topology with:
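For example (assuming the topology was submitted under the name `enrichment`):

```
$ docker-compose exec storm storm kill enrichment -w 0   # -w 0 skips the wait period before killing
```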
### Run sensor data end to end
First ensure configs were uploaded as described in the previous example. Then start a sensor and leave it running:
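For example, using the (illustrative) producer script and the bundled Bro sample with a one-second delay:

```
$ docker-compose exec kafkazk ./bin/produce-data.sh /data/bro.csv bro 1   # hypothetical script and path; verify in the image
```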
Open a separate console session and verify the sensor is running by consuming a message from Kafka:
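For example, using the Kafka console consumer shipped in the container (the path to the Kafka CLI tools depends on the image; older Kafka versions take --zookeeper, newer ones --bootstrap-server):

```
$ docker-compose exec kafkazk kafka-console-consumer.sh \
    --zookeeper localhost:2181 --topic bro --from-beginning --max-messages 1
```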
A new message should be printed every second. Now kill the consumer and start the Bro parser topology:
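Using the same (hypothetical) wrapper from the "Manage a topology" example:

```
$ docker-compose exec storm ./bin/start_parser.sh bro   # hypothetical wrapper; argument is the sensor name
```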
Bro data should be flowing through the bro parser topology and into the Kafka enrichments topic. The enrichments topic should be created automatically:
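For example, list the topics to confirm it exists:

```
$ docker-compose exec kafkazk kafka-topics.sh --zookeeper localhost:2181 --list
```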
Verify parsed Bro data is in the Kafka enrichments topic:
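For example:

```
$ docker-compose exec kafkazk kafka-console-consumer.sh \
    --zookeeper localhost:2181 --topic enrichments --from-beginning --max-messages 1
```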
Now start the enrichment topology:
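Again using the hypothetical enrichment wrapper from the "Manage a topology" example:

```
$ docker-compose exec storm ./bin/start_enrichment.sh   # hypothetical wrapper
```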
Parsed Bro data should be flowing through the enrichment topology and into the Kafka indexing topic. Verify enriched Bro data is in the Kafka indexing topic:
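For example:

```
$ docker-compose exec kafkazk kafka-console-consumer.sh \
    --zookeeper localhost:2181 --topic indexing --from-beginning --max-messages 1
```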
Now start the indexing topology:
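And the hypothetical indexing wrapper:

```
$ docker-compose exec storm ./bin/start_indexing.sh   # hypothetical wrapper
```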
Enriched Bro data should now be present in the Elasticsearch container:
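For example, check the indices over the exposed Elasticsearch port (substitute your Docker host IP; the index name pattern assumes Metron's sensor-based naming, e.g. bro_index_*):

```
$ curl 'http://192.168.99.100:9200/_cat/indices?v'                    # list all indices
$ curl 'http://192.168.99.100:9200/bro_index*/_search?pretty&size=1'  # fetch one enriched Bro document
```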