This is a complete example of how to start the platform, create a pipeline that writes data to Kafka, and create another pipeline that reads that data back from Kafka and writes the text to a file.
NOTE: For an easier introductory example of setting up a Coral pipeline, look here.
In this tutorial, we will create the following setup:
Kafka
We put data on Kafka ourselves so that we can read it back in the second pipeline. Admittedly, this example is a bit contrived, but it demonstrates how to read from and write to Kafka in the format we need.
As before, we assume that you have downloaded and extracted the Coral platform on your machine, and that Cassandra is running. In this tutorial, we assume that you use the latest version of the Coral platform. We will also assume that you use curl to send commands to Coral. As stated in the section Prerequisites, however, you can use any HTTP client you want.
To get Kafka running on your machine, download it here.
Download the latest version in .tar.gz format and extract it into a folder on your machine.
Assuming you have downloaded version 0.9.0.0, execute the following commands:
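With Kafka 0.9.0.0 extracted, you need to start ZooKeeper, start the Kafka broker, and create the topic. From the extracted Kafka directory, the commands look like this (the topic name "test" matches the rest of this tutorial):

```shell
# Start ZooKeeper using the bundled default configuration (listens on port 2181)
bin/zookeeper-server-start.sh config/zookeeper.properties &

# Start the Kafka broker (listens on port 9092 by default)
bin/kafka-server-start.sh config/server.properties &

# Create the topic "test" with a single partition and no replication
bin/kafka-topics.sh --create --zookeeper localhost:2181 \
  --replication-factor 1 --partitions 1 --topic test
```

A single partition without replication is fine for a local tutorial; a production topic would use more partitions and a replication factor greater than one.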
We now have a Kafka topic “test” running, registered in ZooKeeper at localhost:2181, that our pipeline can write to.
We will assume that you have created a user “neo” as in the previous “Hello, world” example.
Start the platform
To start the platform, enter the following command:
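The exact start command depends on the distribution you extracted; the jar name below is a placeholder, so substitute the actual file name from your download:

```shell
# Hypothetical jar name -- replace <version> with the version you extracted
java -jar coral-runtime-<version>.jar
```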
To test whether the platform runs correctly, issue the following command:
The platform will respond with
This is a JSON array showing the runtimes currently running on the platform. As there are no runtimes on the platform yet, it returns an empty array.
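The check can be performed with curl. The port and path below are assumptions; adjust them to match your Coral configuration:

```shell
# Port 8000 and the /api/runtimes path are assumptions about the local setup
curl -s http://localhost:8000/api/runtimes
```

On a freshly started platform this returns the empty JSON array described above.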
Set up the first runtime
We will set up a runtime with a generator actor and a kafka producer actor that writes the generated data to file. The definition of the runtime is as follows:
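The field names and actor parameters below are illustrative rather than authoritative; consult the Coral actor reference for the exact schema. A sketch of such a definition:

```json
{
  "name": "runtime1",
  "owner": "neo",
  "actors": [
    {
      "name": "generator1",
      "type": "generator",
      "params": {
        "format": { "field1": "Hello, world!" },
        "timer": { "rate": 10 }
      }
    },
    {
      "name": "kafka1",
      "type": "kafka-producer",
      "params": {
        "topic": "test",
        "kafka": { "metadata.broker.list": "localhost:9092" }
      }
    }
  ],
  "links": [
    { "from": "generator1", "to": "kafka1" }
  ]
}
```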
This is the first part of the pipeline that generates data and sends it to Kafka. This runtime generates the text “Hello, world!” 10 times per second and writes the output to the Kafka topic “test” until it is stopped.
To create the runtime, send the following command:
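Assuming the runtime definition shown above is saved in a file called runtime1.json, and assuming the same port and path as before, the request looks like this:

```shell
# Endpoint and file name are assumptions about the local setup
curl -X POST http://localhost:8000/api/runtimes \
  -H "Content-Type: application/json" \
  -d @runtime1.json
```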
The platform responds by returning the following information:
The UUIDs and the created time in your response may vary from the ones shown here.
The runtime has now been created.
Start the first runtime
The runtime is now created but not started yet. To start the runtime, issue a PATCH command as follows:
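The endpoint shape and the status payload below are assumptions; check the Coral API reference for the exact form:

```shell
# Assumed endpoint and payload for starting a runtime
curl -X PATCH http://localhost:8000/api/runtimes/runtime1 \
  -H "Content-Type: application/json" \
  -d '{"status": "start"}'
```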
The platform responds with:
The start time in your response may vary from the one shown here. The runtime is now started and will generate a “Hello, world!” message ten times per second. You can check that the data actually arrives in Kafka with the following command (this command should be executed in the Kafka directory):
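The Kafka distribution ships a console consumer for exactly this purpose. With the 0.9-era tooling, which reads consumer offsets through ZooKeeper, the command is:

```shell
# Print every message on the "test" topic from the beginning; Ctrl-C to stop
bin/kafka-console-consumer.sh --zookeeper localhost:2181 \
  --topic test --from-beginning
```

If the first runtime is working, you should see a steady stream of generated events.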
Create the second runtime
To listen to the Kafka events that we are generating, we create a second runtime.
To do this, we are going to create another runtime with the following definition:
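As with the first runtime, the actor names and parameters below are a sketch, not the authoritative schema; this definition assumes a Kafka consumer actor feeding a log actor that writes to “/tmp/runtime2.log”:

```json
{
  "name": "runtime2",
  "owner": "neo",
  "actors": [
    {
      "name": "kafka2",
      "type": "kafka-consumer",
      "params": {
        "topic": "test",
        "kafka": {
          "zookeeper.connect": "localhost:2181",
          "group.id": "coral"
        }
      }
    },
    {
      "name": "log1",
      "type": "log",
      "params": { "file": "/tmp/runtime2.log" }
    }
  ],
  "links": [
    { "from": "kafka2", "to": "log1" }
  ]
}
```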
To create this second runtime, issue the following command:
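Assuming the second definition is saved as runtime2.json, the request mirrors the one for the first runtime (endpoint and file name are again assumptions):

```shell
curl -X POST http://localhost:8000/api/runtimes \
  -H "Content-Type: application/json" \
  -d @runtime2.json
```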
Start the second runtime
To start the second runtime, issue the following command:
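This is the same PATCH pattern used for the first runtime; the endpoint and payload remain assumptions about the local setup:

```shell
curl -X PATCH http://localhost:8000/api/runtimes/runtime2 \
  -H "Content-Type: application/json" \
  -d '{"status": "start"}'
```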
Investigate the output
The file “/tmp/runtime2.log” should now contain the following:
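You can watch events being appended to the file as they arrive:

```shell
# Follow the log file; each consumed Kafka event is written as a new line
tail -f /tmp/runtime2.log
```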
And that’s it: our “Hello, world” example using Kafka is complete.