39th meetup on Apache Kafka and Apache Ignite

Winter is coming, so we are picking up our best habits again by organising our 39th BigData.be meetup. We had 2 great presentations lined up for you on Apache Kafka and Apache Ignite. Read on!

Summer is already a long time ago, but so was our last BigData.be meetup! Our mistake …

But “Winter is coming!” So we picked up our best habits again. The 39th meetup was on Tuesday, November 8th, 2016 at 19h30.

Cegeka was so kind as to offer their magnificent aula at their Hasselt campus as our venue. Thank you, Rutger Claes, for organising this!

The attendance this time was a cozy bunch of people! It turned out to be a most interesting and interactive meetup.

Really interesting talks by Daan and Mathias; strongly recommended.
Bjorn

Streams++, A complete streaming framework using Apache Kafka

Daan Gerits, CTO at BigBoards

Daan introduced the most important components in the Apache Kafka ecosystem.

Build apps, not jobs

That quote eloquently contrasts Kafka with the other Big Data application patterns.

The Apache Kafka ecosystem, showing Kafka at the center of course, surrounded from left to right by Kafka Security, Schema Repo, Kafka Proxy, Kafka Connect and finally Kafka Streams
The Apache Kafka ecosystem

What is Apache Kafka?

Kafka is a distributed streaming platform. “A message broker with a twist.” What does that mean? Producers put messages on Topics, whereas Consumers process messages from Topics sequentially. A message is a simple byte array, but it carries a timestamp, a key and a value. The Topics themselves, on the other hand, are more like datastores: they are persisted to disk to keep the messages available. Partitioning and replication make Topics highly available. And Kafka is fast!
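To make that concrete, here is a toy pure-Python model of those ideas (this is emphatically not Kafka's actual API; the `Topic` and `Consumer` classes are invented for illustration): a topic is a set of append-only partition logs, messages are keyed and timestamped, and a consumer reads each partition sequentially while tracking its own offsets.

```python
import time
from collections import defaultdict

class Topic:
    """Toy model of a Kafka topic: an append-only log per partition."""
    def __init__(self, name, partitions=2):
        self.name = name
        self.logs = [[] for _ in range(partitions)]

    def produce(self, key, value):
        # Messages with the same key land in the same partition,
        # so per-key ordering is preserved.
        p = hash(key) % len(self.logs)
        self.logs[p].append((time.time(), key, value))
        return p

class Consumer:
    """Reads each partition sequentially, tracking its own offsets."""
    def __init__(self, topic):
        self.topic = topic
        self.offsets = defaultdict(int)  # partition -> next offset to read

    def poll(self):
        records = []
        for p, log in enumerate(self.topic.logs):
            while self.offsets[p] < len(log):
                records.append(log[self.offsets[p]])
                self.offsets[p] += 1
        return records

clicks = Topic("clicks", partitions=2)
clicks.produce("user-1", "page-a")
clicks.produce("user-2", "page-b")
consumer = Consumer(clicks)
print(len(consumer.poll()))  # 2: both messages consumed
print(len(consumer.poll()))  # 0: offsets advanced, yet the log keeps the data
```

Note how consuming does not delete anything: the "datastore" nature of Topics means a second consumer with fresh offsets would see the same two messages again.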

Kafka Connect is a framework that integrates Kafka with other systems. Its purpose is to make it easy to add new systems to your stream data pipelines. Source Connectors import data from another system by putting it as messages on a Topic. Sink Connectors, on the other hand, read messages from a Topic and write them to a target system.
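The source/sink split can be sketched in a few lines of plain Python (again a toy illustration of the idea, not the Kafka Connect API; the names `source_connector` and `sink_connector` are made up): a source connector polls an external system and appends records to a topic, and a sink connector drains the topic into a target system.

```python
# Stand-ins for the three systems involved:
source_db = [{"id": 1, "name": "alice"}, {"id": 2, "name": "bob"}]
topic = []          # the Kafka topic in the middle
target_index = {}   # e.g. a search index or database on the other side

def source_connector(db, topic):
    """Import: external system -> topic, one message per row."""
    for row in db:
        topic.append(("user", row))

def sink_connector(topic, target):
    """Export: topic -> external system, keyed by the record id."""
    for key, value in topic:
        target[value["id"]] = value

source_connector(source_db, topic)
sink_connector(topic, target_index)
print(sorted(target_index))  # [1, 2]
```

Because both connectors only ever talk to the topic, the database and the index never need to know about each other; that decoupling is the whole point of Connect.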

Kafka Streams is an open-source solution to build and execute powerful stream processing functions. If you use Kafka for stream data transport, Kafka Streams can immediately add stream processing capabilities. Kafka Streams doesn’t even need a separate computer cluster for distributed processing! A Stream reads messages from a topic, transforms them and puts the results on another topic.
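That topic-to-topic shape can be sketched as follows (a toy model, not the Kafka Streams API; `stream_app` is an invented name): the processor is just an ordinary function over records, which is exactly why it can run as a plain app without its own cluster.

```python
# Input and output topics as plain lists of (key, value) messages.
input_topic = [("user-1", "hello world"), ("user-2", "kafka streams")]
output_topic = []

def stream_app(records):
    """Map each message value to its word count, keeping the key."""
    for key, value in records:
        yield (key, len(value.split()))

# The "app" is just a process over (a share of) the input topic;
# running more instances over more partitions adds throughput.
output_topic.extend(stream_app(input_topic))
print(output_topic)  # [('user-1', 2), ('user-2', 2)]
```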

What is Apache Kafka’s appeal?

Apache Kafka is elegantly designed using only simple constructs:

  • Connect and Streams are just libraries.
  • Connect and Stream apps are just simple apps or processes.
  • Apps are super easy to deploy.
  • Just spawn many instances of the same app for more throughput.
  • Kafka apps and orchestration tools are a match made in heaven. Think of Mesos DC/OS, Kubernetes, Docker Swarm and the like.

Here is Daan’s slidedeck …

Sequence data at warp speed with Apache Ignite

Mathias Lavaert, Data Engineer at Dataminded

Mathias and his team had to crack a challenging customer problem:

How to do interactive querying on sequence data at scale?!

We all know that sequential data and the processing thereof are hard to deal with. Typical use cases:

  • Sales history of a customer in retail business
  • Browser activity of a visitor on a website
  • Events generated by a sensor in IoT
  • DNA sequence of a particular species in genomics

Typical operations on sequential data are: align, diff, down sample, outlier, min/max, avg/med, slope, ….
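A few of those operations can be sketched on a plain Python time series of (timestamp, value) pairs (the helper names `downsample`, `min_max` and `slope` are our own, just to make the operations concrete):

```python
series = [(0, 1.0), (1, 2.0), (2, 4.0), (3, 3.0), (4, 5.0), (5, 7.0)]

def downsample(series, step):
    """Keep one observation every `step` timestamps."""
    return [obs for obs in series if obs[0] % step == 0]

def min_max(series):
    """Minimum and maximum observed value."""
    values = [v for _, v in series]
    return min(values), max(values)

def slope(series):
    """Overall slope: value change divided by the time span."""
    (t0, v0), (tn, vn) = series[0], series[-1]
    return (vn - v0) / (tn - t0)

print(downsample(series, 2))  # [(0, 1.0), (2, 4.0), (4, 5.0)]
print(min_max(series))        # (1.0, 7.0)
print(slope(series))          # 1.2
```

Trivial on one short sequence; the whole challenge of the talk is doing this interactively on millions of sequences at once.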

Let’s try to put Apache Spark to work

The first plan of attack was to put Apache Spark straight to work. But the typical layout of the sequence data, sorted by timestamp, is far from ideal for analysis, because the required group-by is both cumbersome and expensive.

A table showing time series data in the ‘observations’ layout, i.e. sorted first by key, then by timestamp, and pointing to the value
Time series ‘Observations’ layout

First alternative: Spark TS, which builds on top of Spark to add a set of abstractions for manipulating time series data. But this project is a risk, as it is immature and maintained by a single developer.

Second alternative: HuoHua. Right now, it is only a concept laid out in this presentation. HuoHua is Chinese for ‘spark’ 🙂 The authors are a team of 8 engineers at TwoSigma (ed.: one of the first customers of BigBoards).

Anyway, time series data does not match well with Spark’s typical data model: it causes too much memory pressure and too many shuffle issues.

Apache Ignite to the rescue!

Apache Ignite In-Memory Data Fabric is a high-performance, integrated and distributed in-memory platform for computing and transacting on large-scale data sets in real-time, orders of magnitude faster than possible with traditional disk-based or flash technologies.

A jigsaw puzzle containing all of Apache Ignite's components: data grid, compute grid, service grid, streaming, Hadoop acceleration, advanced clustering, file system, messaging, events and data structures
The Apache Ignite component overview

If you look at the picture above, you can see that Apache Ignite covers a lot of ground. Mathias and his team specifically used the Data Grid and Compute Grid components to build their proof of concept.

An image says more than a thousand words …

Apache Ignite's in-memory data grid stores keys and values across the memory of multiple nodes to make the data highly available, using a back-end database as a write-through and read-through persistence engine
Apache Ignite’s in-memory data grid
Apache Ignite's compute grid distributes the compute workload across its nodes, each calculating an intermediate result, which combined deliver the solution in a fraction of the time of a sequential calculation
Apache Ignite’s compute grid

The advantages of Apache Ignite that appealed to the Dataminded team, are:

  • Recognizable Java APIs
  • Computations are simple Java Callables returning a Future<T>. In an interactive environment, that is way more flexible than launching an Apache Spark job.
  • Data affinity allows Apache Ignite to execute code on the nodes where the data resides. This results in major speed improvements.
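The Callable/Future pattern plus data affinity can be mirrored in Python with `concurrent.futures` (a sketch of the idea only; Ignite itself exposes Java Callables returning `Future<T>`, and the `affinity`, `grid` and `compute_on_node` names here are invented). Each "node" is a worker holding its own partition of the data, and a computation for a key is sent to the node that owns that key, so the read is local.

```python
from concurrent.futures import ThreadPoolExecutor

NODES = 2

def affinity(key):
    """Route a key to the node that owns its partition (deterministic)."""
    return sum(key.encode()) % NODES

# A tiny "data grid": each node stores only the keys it owns.
grid = [{} for _ in range(NODES)]
for key, seq in {"seq-a": [1, 2, 3], "seq-b": [10, 20, 30]}.items():
    grid[affinity(key)][key] = seq

def compute_on_node(key):
    """The 'callable' executed where the data lives: it reads the
    sequence from the local partition, so nothing crosses the network."""
    node = grid[affinity(key)]
    return sum(node[key])

# Submitting a computation immediately returns a future; results are
# collected when (and only when) we ask for them.
with ThreadPoolExecutor(max_workers=NODES) as pool:
    futures = {k: pool.submit(compute_on_node, k) for k in ("seq-a", "seq-b")}
    results = {k: f.result() for k, f in futures.items()}

print(results)  # {'seq-a': 6, 'seq-b': 60}
```

The interactive appeal is visible in the shape of the code: submit, keep working, collect the future later, with no job submission ceremony in between.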

The benefits that Apache Ignite brought to Mathias’ problem domain, are:

  • Processing sequences without loss of context
  • Colocating sequences results in zero network transfer
  • Analyzing a single sequence is fast and convenient
  • Very high level of control

Here is Mathias’ slidedeck …

PS: Do scroll to the last page …we love Dataminded’s corporate values!!!

[pdfjs-viewer url=http://bigdata.be/wp-content/uploads/2016/11/Apache-Ignite.pdf viewer_width=100% viewer_height=356px fullscreen=false download=false print=false]

The After-party

After the presentations we had a group discussion on what our members want or expect from the community. We continued this discussion over a beer in the bar across the street. Here are some thoughts:

  • Do we want to organize and participate in hackathons to get hands-on experience?
  • Who can present meaningful ideas to hackathon on?
  • Do we want to see more business use cases during the meetups?
  • Other?

That was my 1st BigData.be meetup. I liked it a lot! Engaging talks at a great location/venue. Keep it coming. I’ll plan some time for a beer next time 🙂
Mikhail Shilkov

After the meetup, Tom Bayens proposed a possible topic for a hackathon: build the best investment strategy using historic and real-time stock quote data. As far as Tom is concerned, there seems to be ‘an ideal match between the necessary calculations and the technologies that fit BigData.be’.

Share your own ideas to grow our community!

On Kafka and Hadoop use cases in Europe – 35th meetup

As Big Data Belgium’s first meetup of 2016, we had 2 interesting topics scheduled: Apache Kafka performance and Hadoop use cases in Europe. So, a big thanks goes to both our speakers, and also to Co.Station BXL for hosting us!

Kafka, the Big Data message broker

Wannes De Smet – Sizing Servers

Often described as the heart of any scalable Big Data cluster, Apache Kafka has quickly become the go-to message broker for such environments. A message broker’s task is to move messages from component A to B (and C), and doing so reliably at a scale of millions of messages is no small feat.

Wannes presented a short intro to Kafka, followed by a deep dive through the entire process of reliably producing and consuming messages. Oh, and doing all that in a distributed, highly available, fault-tolerant manner, of course. He walked us through some of the architectural requirements and operational intrinsics (configuration, monitoring, …) of using and operating a Kafka cluster, based on lessons learned from moving a complex stack to Kafka in production.

So if you are still shifting CSV files around, take some time to learn about the ultimate upgrade.

Wannes cannot share his slide deck with us, but I’m pretty sure that he’ll be keen to share his slides with you personally, if you provide the Sizing Servers Research Lab with your input on their Big Data performance research.

Hadoop in the real world: stories from across Europe

Tim Marston, Director, Regional Alliances, EMEA – Hortonworks

In the 2nd presentation of the evening, Tim Marston introduced us to HDP, Hortonworks’ flavour of Hadoop. He highlighted its strengths as a fully open-source system, before going into more detail on various use cases which they implemented across Europe. The slides give all the details.

Hortonworks – Hadoop Stories

Thank you all for being there!

May the data be with you!

Big Data and Data Science – 27th meetup

Our 27th meetup, organised as a joint venture with DataScience.be, was a huge success! The goal was to give a thorough introduction to Big Data to the data scientists and business people of both organisations.

In total, 221 participants registered across both communities! Unfortunately, quite a lot of people did not make it last night. That is probably due to the EU summit that was happening yesterday. (Note that the meetup was held at the VUB in Elsene.) But it was still a huge crowd.

Presentations on Big Data for Data Science

Philippe Van Impe, co-organizer of DataScience.be, gave an overview of last year’s activities of the DataScience.be community. He focussed specifically on their data-for-good and hackathon initiatives. In the presentation, he hid a product placement for BigBoards: on one of the pictures from a hackathon, Kris Peeters’ Hex was visible in the foreground. The Hex was used to do social network analysis!

Next, the DataScience.be team who have been working on the Médecins Sans Frontières (MSF) project presented an overview of their work and results. The team was led by Edward Vanden Berghe. They received a dataset from MSF on the organisation’s donations. The team screened the dataset for donor segmentations and looked for actionable insights to help MSF improve its revenues.

As the 3rd speaker, I gave an introduction to Big Data and what it can mean to organisations, large and small. Finally, I touched on the importance of data science in giving meaning to the data.

Daan Gerits took over and got into the details of how to set up a scalable and resilient Big Data architecture.

After the break, Ferdinand Casier and Mathias Verbeke presented their EluciDATA project, which starts in 2015. The goal is to help Belgian companies with data innovation. Any questions or requests for participation can be sent to info@elucidata.be!

And last but not least, Karim Douïeb explained how they are using Spark for call detail record analysis for mobile operators. Really interesting!

The meetup ended at about 21h30 with a Q&A session with all presenters together. Very thoughtful questions were raised by a sharp audience!

Thank you all for participating!!!

Images from the 27th meetup

Strata 2014 – Claim your discount!

This year, the Strata conference takes place from 19 to 21 November 2014 in Barcelona. Next to being a gorgeous city, the conference is another reason to visit for anyone with an interest in data! To give you an idea of what Strata is, I pulled a summary from the StrataConf website.

Moreover, we got a discount code! Pull the link and code from the sponsors list on our meetup page!

About the O’Reilly Strata Conference

The best minds in data will gather in Barcelona this November for the O’Reilly Strata Conference to learn, connect, and explore the complex issues and exciting opportunities brought to business by big data, data science, and pervasive computing.

The future belongs to those who understand how to collect and use their data successfully. And that future happens at Strata.

Why You Should Attend

Strata Conference is where big data’s most influential business decision makers, strategists, architects, developers, and analysts gather to shape the future of their businesses and technologies. If you want to tap into the opportunity that big data presents, you want to be at Strata.

In a crowded marketplace of “Big Data” conferences, Strata has firmly established itself as the place where you go to meet people who think and do data science.

At Strata, you’ll:

  • Be among the first to understand how you can leverage the promise of this huge change, and survive the resulting disruption
  • Find new ways to leverage your data assets across industries and disciplines
  • Learn how to take big data from science project to real business application
  • Discover training, hiring, and career opportunities for data professionals
  • Meet face-to-face with other innovators and thought leaders

Experience Strata

Strata Conference delivers the nuts-and-bolts foundation for building a data-driven business—the latest on the skills, tools, and technologies you need to make data work—alongside the forward-looking insights and ahead-of-the-curve thinking O’Reilly is known for.

There was a palpable sense of excitement in the air. Obviously most of the attendees were already ‘data’ aficionados, but it’s clear that ‘data’ in various forms is on the radar for governments, large corporations, and the developer communities.

At Strata, you’ll find:

  • Three days of inspiring keynotes and intensely practical, information-rich sessions exploring the latest advances, case studies, and best practices
  • A sponsor pavilion with key players and latest technologies
  • A vibrant “hallway track” for attendees, speakers, journalists, and vendors to debate and discuss important issues
  • Plenty of events and opportunities to meet other business leaders, data professionals, designers, and developers

About O’Reilly

O’Reilly is followed by venture capitalists, business analysts, news pundits, tech journalists, and thought leaders because we have a knack for knowing what’s important now and what will be important next—and the ability to articulate the seminal narratives about emerging and game-changing technologies.

We don’t say this to brag. We say it to make a point: we’re not easily hypnotized by hype. We’ve seen the bubbles build and burst. For over three decades, we’ve been tapping into a deep network of alpha geeks and thought leaders to recognize the truly disruptive technologies amidst the fluff. So when we invest in a conference, we’re not just following the hype, we’re committed to creating a community around an issue we believe is transformative.

At O’Reilly, we think big data is not just important. We think it’s a game changer. That’s why we created Strata.

O’Reilly’s conferences forge new ties between industry leaders, raise awareness of technology issues we think are interesting and important, and crystallize the critical issues around emerging technologies. Understanding these emerging technologies—and how they will transform the way we do business—has never been more crucial. If you want to understand the challenges and opportunities wrought by big data, you’ll want to attend Strata.

Spark!

More than 80 people showed up at our last meetup, which focused on Spark. Because there are more and more signs that Spark will become the successor to Hadoop MapReduce, we invited some people who are already using Spark in production.

Andy gave an introduction to functional programming and Scala in just 45 minutes, which is definitely not enough to cover all the details. His slides can be found here.

Excellent meetup. The Scala introduction was so quick that it blew my mind but gave me enough information to follow the rest

(Eric Darchis)

We had Toni Verbeiren who gave an introduction to Spark and demonstrated Spark from the command line. Follow the links to his slides and visualization code.

Very interesting mix of Scala, Spark and Use Case

(Peter Vandenabeele)

Gerard Maas showed us how Spark is used in production at Virdata.com, with a cool demo of their platform at the end. His slides are available here: Spark-at-Virdata

It was Sparkling! (Radek O)

I am always amazed by the quality of the BigData.be and ScalaBe presentations. Big up to all of you ! (Frederic)

The presentations were recorded by Parleys.com and will be published in a “bigdata.be” channel. We’ll let you know when they become available over there.

Thanks to Ordina for the location and for providing food and drinks.

See you next time, we are always looking for venues and presenters.

 

Meetup 8: Call for participation

Hello all,

the next meetup is already approaching and we are still missing some interesting topics to discuss.

So if you have read something lately that is worth mentioning, or if you’re in the middle of a breakthrough on an interesting brain teaser, or if you are implementing a wonderful project or just doing anything else relevant to our domain, please take a moment to prep some slides and get a discussion going at our 8th meetup!

Looking forward to hearing from you all!

-BigData.be

The 7th meetup or Waiting for CSI Ixelles

Three weeks ago, our little bigdata community held its 7th meetup, in Brussels. We think it is a good idea to hold our meetups in different cities, since we are the Belgian bigdata community. (If you can host a meetup in your city, please contact us!) On top of the typical evening traffic chaos and a meeting of all European prime ministers, there was a crime scene (some sort of knife fight) next to our meeting place, which caused some of our participants to arrive a bit later than planned.

Nevertheless, we had a good schedule, consisting of two talks with lots of good interaction between the speakers and the audience.

The first talk was about Storm, a distributed realtime processing framework coming out of Twitter. Daan Gerits gave an introduction to Storm and walked us through an example application he had created for this meetup.

The second talk (by me) was about Apache Giraph, a graph processing framework on top of Apache Hadoop.

If you have been to one of our meetups and you liked it, please spread the word, leave comments here, and consider the “call for papers” for our 8th meetup in July open!

The day after …. and more

So it has been a while since we held our first meetup on July 5th, 2011. We had a lively discussion on ideas, wants and won’ts for our young but apparently vibrant community. After some discussion in our group, we prefer to set up our meetups on a rotating schedule over Tuesday, Wednesday and Thursday at an interval of 6-7 weeks.

So, we’ll be scheduling our 2nd meetup for Wednesday, August 24th, 2011. Keep an eye on our meetup page.

All ideas for a topic that night are more than welcome!

One day to go …

We have all been anxiously waiting for that special day on which we may kick some life into our community. For those who have no idea what I’m talking about: The Belgian BigData launch event will take place tomorrow in Ghent!

There are a few things we would like to talk about, but most importantly we want your feedback and brilliant ideas regarding BigData concepts, technologies and the Belgian community.

As you may know, the event will start tomorrow (July 5th) at 6:30 PM in the Atari room of the IBBT Zuiderpoort Office Park (Gaston Crommenlaan 8 (bus 102), Ghent – map). The following items are a rough outline of the evening:

  1. Members introduction
  2. Community brainstorm
  3. bbuzz debrief

21 members have already confirmed their presence. If you are not one of them and you still want to join the event, you can do so on our meetup page. All information about the event can be found there, as well as the list of members who will join us tomorrow.

We are impatiently looking forward to meeting you.

See you tomorrow!