As our 8th meetup turned into a hackathon on Twitter’s Storm, we devised a small presentation on what our use case was going to be: based on a person’s name, we fetch profile information from various social networks and put the profiles next to each other to highlight the differences.
The next meetup is already approaching, and we are still missing some interesting topics to discuss.
So if you have read something lately that is worth mentioning, if you’re in the middle of a breakthrough on an interesting brain teaser, or if you are implementing a wonderful project or doing anything else relevant to our domain, please take a moment to prep some slides and get a discussion going at our 8th meetup!
Looking forward to hearing from you all!
Three weeks ago our little community on big data had its 7th meetup, in Brussels. We think it is a good idea to hold our meetups in different cities, since we are the Belgian big data community. (If you can host a meetup in your city, please contact us!) On top of the typical evening traffic chaos and a meeting of all the European prime ministers, there was a crime scene (some sort of knife fight) right next to our meeting place, which caused some of our participants to arrive a bit later than planned.
Nevertheless, we had a good schedule, consisting of two talks with lots of good interaction between the speakers and the audience.
The first talk was about Storm, a distributed realtime processing framework coming out of Twitter. Daan Gerrits gave an introduction to Storm and walked us through an example application he had created for this meetup.
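Storm itself is Java-based and the example application is not reproduced here, but the core idea — spouts emitting tuples into a stream, bolts transforming them, a topology wiring everything together — can be sketched in a few lines of plain Python. All class names below are hypothetical, for illustration only:

```python
# Library-free sketch of Storm's spout/bolt model, using word counting
# as the canonical example. In real Storm these would run distributed.

class SentenceSpout:
    """Emits raw sentences one by one, like a spout pulling from a queue."""
    def __init__(self, sentences):
        self.sentences = sentences

    def emit(self):
        for sentence in self.sentences:
            yield sentence

class SplitBolt:
    """Splits each incoming sentence into words."""
    def process(self, sentence):
        return sentence.lower().split()

class CountBolt:
    """Keeps a running word count across the whole stream."""
    def __init__(self):
        self.counts = {}

    def process(self, words):
        for word in words:
            self.counts[word] = self.counts.get(word, 0) + 1

def run_topology(sentences):
    """Wire spout -> split bolt -> count bolt and drain the stream."""
    spout, split, count = SentenceSpout(sentences), SplitBolt(), CountBolt()
    for sentence in spout.emit():
        count.process(split.process(sentence))
    return count.counts
```

The point of the real framework, of course, is that each spout and bolt can be parallelized and distributed across a cluster while this dataflow stays the same.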
If you have been to one of our meetings and you liked it, please spread the word, leave comments here, and consider the “call for papers” for our 8th meetup in July open!
Interesting discussions throughout the evening led us to define two parallel tracks. On the one hand, we will try to come up with a semi-structured model for real estate data. On the other hand, we will attempt to apply data analytics to real estate data using algorithms provided by the Apache Mahout community.
Besides being of interest to the bigdata.be community, we reasoned that a semi-structured data model would support integration of real estate data with orthogonal information derived from e.g. OpenStreetMap or OpenBelgium. This will enable us to enrich the existing data with things like:
- distance to n cities,
- a ‘green index’ that correlates how far a real estate property is located from a nearby forest, or
- a ‘density index’ based on the number of houses for sale at the moment within a radius of 1, 2, 4, 8 or 16 km.
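As a sketch of how such a density index could be computed once listings carry coordinates, here is a small Python snippet (an illustration only, not project code — the tuple-based data layout is an assumption) that counts listings within each radius using the haversine great-circle distance:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points, in kilometres."""
    r = 6371.0  # mean Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def density_index(prop, listings, radii_km=(1, 2, 4, 8, 16)):
    """For one property, count the other listings within each radius.

    prop and listings entries are (lat, lon) tuples; the property itself
    is excluded from its own counts.
    """
    index = {}
    for radius in radii_km:
        index[radius] = sum(
            1 for other in listings
            if other is not prop
            and haversine_km(prop[0], prop[1], other[0], other[1]) <= radius
        )
    return index
```

The ‘green index’ would work the same way, only measuring distance to forest polygons from OpenStreetMap instead of to other listings.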
We thought of using HBase or Cassandra as datastores for this task, but we will only decide during follow-up meetings. Remembering the interests poll from the first bigdata.be meetup, there will hopefully be quite a few members of the bigdata.be community to work out this track of the real estate project.
The second track of the real estate project, on the other hand, aims to attract social interest by producing insights that are relevant to a wider audience than just our bigdata.be community. As such, we touched on three topics for data analytics that we could elaborate on depending on their feasibility.
- Prediction of the price and time of the sale
Based on archives from real estate companies, we will evaluate how well the price and the time of sale of a house can be predicted. Skeptical readers should have a look at the Zestimate® price from the zillow.com real estate agency in the U.S.
- Text mining on free text descriptions
It’s obvious that a plastic door correlates with a cheap house, while a granite kitchen correlates with an expensive one. But what other vocabulary-based associations can we derive by performing text mining on the free-text fields written by the seller? (cf. Freakonomics, chapter 2)
- Recommendation engine for similar real estate properties
Finally, by analyzing the traffic logs of a real estate website, we should be able to build a recommendation engine that guides visitors to related houses on sale. Think of how you search Amazon.co.uk for a Canon 550D and are shown a matching camera bag as well.
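To give a feel for that third topic: a minimal “viewed together” recommender can be built from plain item-to-item co-occurrence counting. This is only a sketch, under the assumption that the traffic logs can be grouped into per-visitor sessions of viewed house ids (Mahout offers far more sophisticated recommenders):

```python
from collections import defaultdict
from itertools import combinations

def cooccurrence_recommendations(sessions):
    """Build an item-to-item co-visitation count from per-visitor sessions.

    sessions: iterable of lists of house ids viewed in one visit.
    Returns, for each house, the other houses sorted by how often they
    were viewed in the same session ("people who viewed X also viewed Y").
    """
    cooc = defaultdict(lambda: defaultdict(int))
    for viewed in sessions:
        # set() so repeated views within one session count only once
        for a, b in combinations(set(viewed), 2):
            cooc[a][b] += 1
            cooc[b][a] += 1
    return {
        house: sorted(others, key=lambda h: -others[h])
        for house, others in cooc.items()
    }
```

At real-website scale the counting step is exactly the kind of job you would push down into a MapReduce pipeline or a Mahout item-similarity run.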
Keep your eyes on the bigdata.be meetup site and, if interested, join the next events on the real estate project!
On Wednesday August 24th 2011, we had our second meetup. As some people cancelled at the last moment, the crowd was not as large as for our first meetup: now, there were just 6 of us.
This meetup was the first time we organised our get-together as an open discussion. Davy Suvee came up with that idea and apparently everybody present enjoyed this format very much. This article is a synopsis of the topics discussed.
On the lookout for a Big Data project
One of the major demands from our community members is to work together on a specific Big Data project to gain hands-on experience. In this respect, the Big Data Wars thread was started on our groups page. However, instead of organizing a true public challenge, it would be easier and more instructive to participate in a challenge as a team, or to look for a specific project that we can implement ourselves. These options are outlined below.
We decided to create a Bitbucket account where we can define, plan and work on the code for these projects.
Wikipedia’s Participation Challenge
A few days after Daan Gerits launched the BigData Wars idea on the group page, Nathan Bijnens referred to the Wikipedia’s Participation Challenge as a good fit for a BigData.be project. You can read the full description on the kaggle.com page, but the idea is to build a predictive model that allows the Wikimedia Foundation to understand what factors determine editing behaviour and to forecast long term trends in the number of edits to be expected on wikipedia.org. A random sample from the English Wikipedia dataset from the period January 2001 – August 2010 serves as training dataset for the predictive model.
What makes this challenge appealing is its very specific problem description, which still encompasses quite a large fraction of the big data complexity:
- data availability,
- datastore modelling,
- defining secondary data,
- deriving sampled test data,
- building predictive models using various approaches, technologies and algorithms.
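As a taste of that last modelling step, a deliberately naive baseline would fit a straight line to an editor’s monthly edit counts and extrapolate it forward. This is not the challenge’s reference approach — just an illustration of the kind of per-editor forecast involved, with a made-up input format:

```python
def linear_forecast(monthly_edits, months_ahead=1):
    """Fit a least-squares line to one editor's monthly edit counts
    and extrapolate months_ahead into the future."""
    n = len(monthly_edits)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(monthly_edits) / n
    var_x = sum((x - mean_x) ** 2 for x in xs)
    cov_xy = sum((x - mean_x) * (y - mean_y)
                 for x, y in zip(xs, monthly_edits))
    slope = cov_xy / var_x if var_x else 0.0
    intercept = mean_y - slope * mean_x
    prediction = intercept + slope * (n - 1 + months_ahead)
    return max(prediction, 0.0)  # an edit count cannot go negative
```

A serious entry would of course bring in per-editor features (tenure, reverts, talk-page activity) and a proper learner, but even this baseline forces you through the data-availability and sampling issues listed above.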
However, as the deadline for the project is very close, i.e. Tuesday 20 September 2011, we probably won’t be truly participating, but we can still use the problem definition.
So, if you are interested in participating in this topic, contact us or join in the discussion or the repository.
Two more proposals were made as project topics:
- BioMed: As a computational biology researcher at Ghent University, Kenny Helsens proposed to see if he could come up with some suitable project definition in the area of bioinformatics, maybe genome- or proteome related.
- Immo and GIS: Another interesting area for a well-suited project might be the combination of historical data made available by some immo (real estate) website with GIS-related data, e.g. from OpenStreetMap. A number of interesting problems can be derived, requiring e.g. predictive models. We’ll be contacting some immo websites on this matter.
As these projects are still in their incubation stage, we haven’t yet created any specific web areas for them. However, these might follow very soon, so keep checking back!
Daan Gerits has been working on some big data and NoSQL projects, and was wondering if anybody has experience with bringing Hadoop and/or MapReduce to the realtime playing field, instead of keeping it strictly for batch processing. The basic difference is that you would be able to feed the data-crunching algorithms incrementally, or by streaming data into the system, and have the algorithms merge the new data into the already (partially) computed result sets.
During a presentation at the SAI on 7 April 2011, Steven Noels and Wim Van Leuven also pointed out that any big data processing system needs to combine a batch layer with a speed layer to achieve at least eventual accuracy, the speed layer being architecturally the most challenging. However, if we could combine realtime processing with existing MR algorithms, …
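A minimal, in-memory sketch of that batch/speed split might look like the following (hypothetical, not tied to Hadoop or any particular framework): new events land in a speed view immediately, a periodic batch run recomputes the accurate view from the full history, and queries merge both views.

```python
class BatchSpeedCounter:
    """Toy illustration of a batch layer plus speed layer for counting.

    The batch view is accurate but stale; the speed view absorbs events
    the moment they arrive; a query merges the two.
    """
    def __init__(self):
        self.batch_counts = {}   # result of the last full batch run
        self.speed_counts = {}   # events seen since that batch run
        self.history = []        # full event log, input to the batch run

    def ingest(self, key):
        # Speed layer: incremental update, visible immediately.
        self.history.append(key)
        self.speed_counts[key] = self.speed_counts.get(key, 0) + 1

    def run_batch(self):
        # Batch layer: recompute everything from the raw history,
        # then reset the speed layer's deltas.
        counts = {}
        for key in self.history:
            counts[key] = counts.get(key, 0) + 1
        self.batch_counts = counts
        self.speed_counts = {}

    def query(self, key):
        # Merge the stale-but-accurate batch view with the fresh deltas.
        return self.batch_counts.get(key, 0) + self.speed_counts.get(key, 0)
```

Counting merges trivially; the hard part Daan raised is doing the same merge for algorithms whose partial results do not combine so neatly.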
Big Data Technology poll
As our meetup group was rather small, we did not redo the technology poll. However, we were wondering whether some suitable tool exists to automate the poll via the bigdata.be website or other electronic means, like Google Docs. So if you have any good ideas in this area, please let us know!
See y’all at the next meetup!
So it has been a while since we held our first meetup on July 5th, 2011. We had a lively discussion on ideas, wants and won’ts for our young but apparently vibrant community. After some discussion in our group, we prefer to set up our meetups on a rotating schedule over Tuesday, Wednesday and Thursday at an interval of 6-7 weeks.
So, we’ll be calendaring our 2nd meetup for Wednesday August 24th, 2011. Keep an eye on our meetup page.
All ideas for a topic that night are more than welcome!
We have all been anxiously waiting for that special day on which we may kick some life into our community. For those who have no idea what I’m talking about: The Belgian BigData launch event will take place tomorrow in Ghent!
There are a few things we would like to talk about, but most importantly we want your feedback and brilliant ideas regarding BigData concepts, technologies and the Belgian community.
As you may know, the event will start tomorrow (July 5th) at 6:30 PM in the Atari room of the IBBT Zuiderpoort Office Park (Gaston Crommenlaan 8 (bus 102), Ghent – map). The following items are a rough outline of the evening:
- Members introduction
- Community brainstorm
- bbuzz debrief
21 members have already confirmed their presence. If you are not one of them and you still want to join the event, you can do so on our meetup page. All information about the event can be found there, as well as the list of members who will join us tomorrow.
We are impatiently looking forward to meeting you.
See you tomorrow!