A few weeks ago, a colleague and I attended the International Conference on Precision Agriculture. During this annual joint conference of the International Society of Precision Agriculture (ISPA) and InfoAg, one theme emerged again and again: “It’s the data (management), stupid.”
What we have to say, and what you want us to hear.
That’s how our blog works. It’s interactive. Let’s learn together.
In-memory computing is evolving rapidly, providing the high-throughput, low-latency data processing needed to fuel IoT innovation. And the open source Apache Ignite project is leading the way with a full-featured in-memory data fabric that includes a data grid, compute grid, service grid, and messaging. But did you know it also provides streaming and complex event processing (CEP) features? With so many other streaming engines available (such as Spark Streaming, Apache Flink, or Apache Apex), why choose Apache Ignite?
Last week, May 23-24, I presented at the In-Memory Computing Summit 2016 in San Francisco, CA. This is an industry-wide event dedicated to in-memory computing technologies and solutions. In-memory computing visionaries, decision makers, experts, and developers gathered to discuss current uses of in-memory computing as well as the future of this rapidly evolving market.
One problem with the Big Data paradigm is the shortage of software engineers equipped to work at Big Data scale. Why is this? Let's check off the reasons:
- It is hard to reason about parallel computation. Steve Jobs was famous for saying that "nobody knows how to program [multi-core]. I mean two, yeah; four, not really; eight, forget it." Now take those eight cores and multiply them across tens, hundreds, or in some cases thousands of machines. This is a difficult problem.
- There is a constantly changing ecosystem of tools, languages, and workflows. This leaves newcomers confused about which ones to learn and use, and which are irrelevant.