
Rolling in Fog Computing to Assay Big Data

Information is power, and in the age of corporate surveillance, profiles on every active individual mean that the system is tilted in favor of those who hold the data. Today, data is no longer something discussed only in boardrooms and research laboratories. As the IoT continues to expand, more and more physical objects are being connected wirelessly to transmit and receive data. Every day, people get out of bed and check the data collected on their sleep patterns, track what they are spending through apps, and browse the possession and run stats of their favorite sports teams. Data is now everywhere in our society, which means the general population is becoming increasingly comfortable using it. That is not to say everyone is suddenly going to become a data scientist, but it does mean the kind of data shared can become more complex as understanding of it grows across the population.
Honeywell’s latest survey, ‘Data’s Big Impact on Manufacturing: A Study of Executive Opinions’, states that around 46 percent of manufacturers agree that implementing and using data analytics is no longer optional. Similarly, 32 percent of the participants see the potential for Big Data analytics and the Industrial Internet of Things (IIoT) to improve performance and increase revenue. Along the same lines, manufacturers perceive data analytics as a key component of a successful IIoT strategy across their operations. Findings like these underline how data can help organizations build tailor-made profiles that can be used for or against someone in a given situation. A fitting example is insurance companies, which historically priced car insurance on driving records and have now started using data-driven profiling methods.
Nevertheless, as unbelievable as it may sound, there are still many companies that use outdated tactics and technology to collect and analyze data. As a result, these companies miss valuable opportunities to extract the maximum value from their data. It is estimated that a whopping 90 percent of the data that exists today was created in just the last two years; in 2014 alone, 204 million emails were sent every minute. This volume, variety, and velocity of data is unprecedented, its territory uncharted, and its potential mostly untapped. For companies that lag behind, the problems will only compound. All said and done, Big Data is not the problem per se; the trouble lies in storing this data and retrieving the relevant parts conveniently enough to make informed decisions. For instance, mobile data, much like IoT sensor data, is being created with increasing rapidity, but the downside is that some, or even most, of it isn’t needed to answer a given analysis query. How can a company quickly home in on which ad is having the most impact while eliminating the “noise” of other social activity happening around it? The answer: Fog Computing. Adopting Fog Computing enables companies to better understand their data and analytics needs, thin out the Big Data, and process and analyze only the information necessary to the query.
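To make the idea of “thinning” concrete, here is a minimal Python sketch of the kind of filtering a fog node might do before anything is sent upstream. The event fields, campaign IDs, and aggregation shown here are illustrative assumptions, not any particular product’s API.

```python
# A minimal sketch of "thinning" a raw event stream at the edge so that only
# the records relevant to an ad-impact query travel upstream.
# CAMPAIGN_AD_IDS, the event fields, and the aggregation are hypothetical.

CAMPAIGN_AD_IDS = {"ad_1042", "ad_2210"}       # ads we actually care about
RELEVANT_ACTIONS = {"impression", "click"}     # events that measure impact

def is_relevant(event):
    """Keep only events that say something about our own ads."""
    return event.get("ad_id") in CAMPAIGN_AD_IDS and \
           event.get("action") in RELEVANT_ACTIONS

def thin_stream(raw_events):
    """Filter and aggregate locally; only the summary leaves the edge node."""
    summary = {}
    for event in raw_events:
        if not is_relevant(event):
            continue                           # drop the social "noise"
        key = (event["ad_id"], event["action"])
        summary[key] = summary.get(key, 0) + 1
    return summary                             # tiny payload vs. the raw feed

# Thousands of raw events reduce to a handful of counters.
raw = [
    {"ad_id": "ad_1042", "action": "click"},
    {"ad_id": "other",   "action": "like"},    # unrelated activity, dropped
    {"ad_id": "ad_2210", "action": "impression"},
]
print(thin_stream(raw))
```

Only the small summary would need to travel to a central data center; the surrounding noise never leaves the edge.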
First coined by Cisco in 2013, the term Fog Computing described a compute and network framework for IoT applications. The technology has since been delivering the right information at the right time to people on any device, driving business success. Fog Computing, also known as Edge Computing or distributed analytics, solves Big Data problems by keeping data closer “to the ground,” so to speak, in local computers and devices, rather than routing everything through a central data center in the Cloud.
Fog Computing basically means designing systems where analytics is performed at the point where (or very close to where) the data is collected. Often, this is where the action based on the insights provided by the data is most needed. Rather than designing centralized systems where all the data is sent back to your data warehouse in a raw state, where it has to be cleaned and analyzed before being of any value, why not do everything at the “edge” of the system?
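As a rough sketch of what “doing everything at the edge” can look like, the following Python example summarizes a window of raw sensor readings locally and forwards only the summary. The window size and the publish_summary() stub are assumptions standing in for whatever uplink a real deployment would use.

```python
# A minimal sketch of edge-side analytics: instead of shipping every raw
# sensor reading to a central warehouse, the edge node summarizes a window
# of readings and forwards only the result.

from statistics import mean

WINDOW_SIZE = 60   # e.g., one reading per second, summarized once a minute

def publish_summary(summary):
    """Stand-in for whatever uplink (MQTT, HTTPS, ...) the deployment uses."""
    print("sending to cloud:", summary)

def summarize_window(readings):
    return {
        "count": len(readings),
        "mean": round(mean(readings), 2),
        "min": min(readings),
        "max": max(readings),
    }

def run_edge_loop(sensor_stream):
    window = []
    for reading in sensor_stream:
        window.append(reading)
        if len(window) == WINDOW_SIZE:
            publish_summary(summarize_window(window))   # 1 message instead of 60
            window.clear()

# 120 simulated readings -> only two small summaries ever leave the device.
run_edge_loop(20.0 + (i % 7) * 0.1 for i in range(120))
```

The cleaning and aggregation that would otherwise happen in the central warehouse is done where the data is born, and only the distilled result travels over the network.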
For instance, say a family has a smartphone per member and three laptops in total; add a tablet or two for good measure. Downloading a software update to each of these devices separately takes a considerable amount of time, not to mention the bandwidth it consumes. But what if one laptop could download the updates and then share them with the phones and tablets? Instead of spending precious, slow bandwidth on each device individually pulling the updates from the Cloud, the devices could use the computing power all around us and communicate among themselves. This is exactly what Fog Computing does.
A Fog Computing framework can help prevent unwanted infrastructure bottlenecks by splitting workloads among local Cloud environments: different ‘things’ (i.e., sensor-equipped, network-connected devices) quickly transmit data to locally deployed ‘fog’ or ‘edge’ nodes rather than communicating directly with the Cloud. From there, a subset of non-time-sensitive data is forwarded from the fog nodes to a centralized Cloud or data center for further analysis and action. Another interesting aspect of this approach is that you can balance how much computing happens on the endpoint device, at the edge of the network, or in the data center in whatever way makes the most sense for your product or service and your users.
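A minimal sketch of that split might look like the following, where a fog node acts immediately on time-sensitive readings and batches only the non-urgent subset for the central Cloud. The threshold, field names, and stub functions are assumptions made for illustration, not a real fog API.

```python
# A minimal sketch of the split described above: a fog node acts on
# time-sensitive readings locally and only batches the non-urgent subset
# for the central Cloud.

URGENT_TEMP_C = 85.0            # act locally above this temperature
cloud_batch = []

def actuate_locally(reading):
    print("local action: throttling device", reading["device_id"])

def forward_batch_to_cloud(batch):
    print(f"forwarding {len(batch)} non-urgent readings for deep analysis")

def handle_reading(reading):
    if reading["temp_c"] >= URGENT_TEMP_C:
        actuate_locally(reading)        # latency-critical path stays at the edge
    else:
        cloud_batch.append(reading)     # non-time-sensitive subset
        if len(cloud_batch) >= 100:
            forward_batch_to_cloud(cloud_batch)
            cloud_batch.clear()

handle_reading({"device_id": "pump-7", "temp_c": 91.3})  # handled at the fog node
handle_reading({"device_id": "pump-7", "temp_c": 42.0})  # queued for the Cloud
```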
Fog Computing uses an infrastructure built from a cluster of compute, storage, and networking resources that delivers enough horsepower to deal with the data locally. This cluster living at the edge is termed the Fog layer. Because it mimics Cloud capabilities at the edge location while still leaning on the Cloud for the heavy lifting, one could safely say that Fog Computing is to IoT what Hybrid Cloud is to enterprise IT. While some experts argue that adding Fog Computing to an IoT network adds complexity, its supporters hold that such complexity is sometimes necessary: in certain use cases, Fog Computing addresses the inadequacies of cloud-only models, which face serious challenges with latency, network bandwidth, geographic focus, reliability, and security.
Fog Computing in the Offing
With all the hype around Fog, it was inevitable that a consortium would be formed, and thus the OpenFog Consortium was born in November 2015. According to the official website, the mission of the consortium is to drive industry and academic leadership in fog computing architecture, testbed development, and a variety of interoperability and composability deliverables that seamlessly leverage cloud and edge architectures to enable end-to-end IoT scenarios. The founding members of the Consortium, ARM, Cisco, Dell, Intel, Microsoft, and Princeton University, have stated that OpenFog will work with existing industry bodies such as the IIC, OCF, OPNFV, and MEC. As of April 12, 2016, GE Digital, Schneider Electric, and IEEE have joined the consortium’s board of directors. Moreover, OpenFog found its first member in Asia, SAKURA Internet, which is chartered to drive momentum in the Asia-Pacific region, starting with Japan.
The Consortium’s founders are quite thrilled about the possibilities that Fog Computing can unlock. One such use case is Fog-as-a-Service (FaaS), where a Fog service provider, which could be a municipality, a telecom network operator, or a webscale company, deploys a network of fog nodes to blanket a regional service area.
The leading vendors in the market further project that, just as Cloud created new business models, growth, and industries, Fog can bring new vendors, new industries, and new business models together with academia to address shared challenges and solve real business problems. Experts believe that Fog Computing will provide ample opportunities to create applications and services that cannot easily be supported by current host-based and cloud-based application platforms. For example, new Fog-based security services could help address many of the challenges involved in securing the Internet of Things. A prime use case comes from GE Digital, which has started deploying more processing power into sensors on trains, because even millisecond delays in relaying sensor data back to the data center to request train routing instructions can have catastrophic effects.
Observing this trend, it is safe to say that Fog Computing will continue to grow in usage and importance as the Internet of Things expands and conquers new ground. With inexpensive, low-power processing and storage becoming more available, we can expect computation to move even closer to the edge and become ingrained in the very devices that generate the data, creating even greater possibilities for inter-device intelligence and interactions. Who knows, maybe one day sensors that merely log data will become a thing of the past.
Wrapping up: Big Data is not going anywhere soon. Businesses of all sizes will be using some form of data analytics to shape their business over the next five years, and the organizations that survive in today’s economy will be infused with digital services. By 2020, at least a third of all data is expected to pass through the Cloud. Enterprises therefore need to ensure that their IT infrastructure is equipped to handle the era of Big Data, and I believe Fog Computing is the way to do that.
Hope this blog post helped you learn about Fog Computing. Keep visiting our site www.acadgild.com for more updates on Big Data and other technologies.
 
