Author - Gautam Goswami

Japan has started using Big Data analysis

According to Yano Research Institute Ltd., Tokyo, Japan, the Big Data market could reach one trillion yen ($10 billion USD) by 2020 in Japan alone. By utilizing Hadoop on cloud computing, it is possible today to process the huge volume of data produced by real-time earthquake warning systems. Seismic data processing is slowly throwing light on accurate earthquake prediction. Besides, in Japan it is helping smart cities minimize waste and improve efficiency. Oil and natural gas exploration and production companies now a...

Read more...

Big Data in Hospital Networks

Due to the unprecedented scale of digitization, Big Data is playing a major role in the healthcare sector. Nowadays in a hospital network, data is continuously recorded from all the medical instruments in a pediatrics ward. The Big Data analytics team is now able to help physicians spot infection trends 12 to 24 hours earlier than they might otherwise have spotted them. The Hadoop framework is making it possible to process big data in terms of about 200...

Read more...

Big Data analysis is not limited to IT graduates

Hundreds of terabytes, petabytes, and even exabytes of data are held in fields such as social media, medical science, space science, finance, defense, and the oil and natural gas industries. There is an urgent need to collect and preserve whatever data is being generated, for the betterment of analysis, research, and possible new inventions. The main common characteristic of this massive amount of data is that it is unstructured. Hadoop is a cost- and time-effective option that helps in gaining...

Read more...

Future of Big Data analysis using Hadoop in Medical Science and Healthcare Organizations

Even though medical science is capable of diagnosing diseases like cancer and Alzheimer’s, these diseases still remain incurable. To find the root cause of these diseases, medical researchers need to analyze patients' medical records, various supporting information, and the climatic conditions in which the patients lived, across different geographical locations. This requires a platform where a huge volume of data can be stored and analyzed. Hadoop is a powerful platform that allows us to store huge...

Read more...

Data is everywhere!

A massive explosion of data has taken place since the advent of social media such as Facebook, Twitter, LinkedIn, emails, and blogs. As per reports, Google processes 24 petabytes of data per day. On average, the Like and Share buttons are viewed across almost 10 million websites daily. Photo uploads on Facebook total 300 million per day. How can we process such a huge volume of data? Traditional software does not have the capability to store such a huge volume...

Read more...

Opportunities in Big Data analysis

Thanks to the invention of the Hadoop framework by the Apache community, we are nowadays capable of processing, for example, 100 petabytes of data together, and it could be even more. Our present traditional software systems are incapable of holding such a huge volume of data, which we typically refer to as Big Data. By utilizing Hadoop and its ecosystem, we can store and process that volume of data in order to find out intricate information. There is a growing...

Read more...

Information analysis using Hadoop

After Kerala's Puttingal Devi Temple fire tragedy, we could see a sudden data explosion across all digital media. After that tragic incident, a huge amount of data was generated in the form of text, voice, photos, videos, blogs, etc., over the internet via social media, news channels, and e-newspapers, and comments, sentiments, and various opinions flooded in on whether bursting firecrackers should be allowed in devotional places or not. This is a classic example of Big Data, where existing traditional software is incapable...

Read more...

Big Data approach in banking systems

Typically, banking systems are responsible for validating and verifying financial transaction data, geo-location data from mobile devices, merchant data, and authorization and submission data. Processing data from numerous social media channels together with the bank's mainframe data center, and delivering a final output, is a significant challenge. Issue: Legacy systems are incapable of processing data while it is in motion. Combining data of different formats (structured, semi-structured, and unstructured) is another challenge. Big Data approach: Big Data analytics enables to...

Read more...

Performance of Hadoop MapReduce

The performance of a Hadoop MapReduce job can be increased considerably without investing more in hardware, simply by tuning some parameters according to the cluster specifications, input data size, and processing complexity. Here are a few general tips to improve MapReduce job performance:

- Always use compression when writing intermediate data (mapper output) to disk before shuffling.
- Include a combiner in the appropriate position.
- LongWritable is the wrong output data type when the output values fall within the Integer range; IntWritable...
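The first and third tips, plus the combiner placement, can be sketched as a job driver against the standard Hadoop MapReduce Java API. This is a minimal word-count-style sketch, not code from the article: the class names `TunedWordCount`, `TokenMapper`, and `IntSumReducer` are hypothetical, and it assumes `hadoop-client` is on the classpath and Snappy is available on the cluster.

```java
import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.io.compress.CompressionCodec;
import org.apache.hadoop.io.compress.SnappyCodec;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class TunedWordCount {

  // Emits (word, 1) for every token in the input split.
  public static class TokenMapper
      extends Mapper<Object, Text, Text, IntWritable> {
    private static final IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();

    @Override
    public void map(Object key, Text value, Context context)
        throws IOException, InterruptedException {
      StringTokenizer itr = new StringTokenizer(value.toString());
      while (itr.hasMoreTokens()) {
        word.set(itr.nextToken());
        context.write(word, ONE);
      }
    }
  }

  // Sums the counts for each word; used as both combiner and reducer.
  public static class IntSumReducer
      extends Reducer<Text, IntWritable, Text, IntWritable> {
    private final IntWritable result = new IntWritable();

    @Override
    public void reduce(Text key, Iterable<IntWritable> values, Context context)
        throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable val : values) {
        sum += val.get();
      }
      result.set(sum);
      context.write(key, result);
    }
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Tip 1: compress intermediate map output before it is spilled and shuffled.
    conf.setBoolean("mapreduce.map.output.compress", true);
    conf.setClass("mapreduce.map.output.compress.codec",
        SnappyCodec.class, CompressionCodec.class);

    Job job = Job.getInstance(conf, "tuned word count");
    job.setJarByClass(TunedWordCount.class);
    job.setMapperClass(TokenMapper.class);
    // Tip 2: reuse the reducer as a combiner, which is safe here only
    // because summation is associative and commutative.
    job.setCombinerClass(IntSumReducer.class);
    job.setReducerClass(IntSumReducer.class);
    // Tip 3: word counts fit comfortably in a 32-bit Integer range,
    // so IntWritable is the right value type, not LongWritable.
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);

    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}
```

The combiner cuts shuffle traffic by pre-aggregating counts on the map side, which compounds with the intermediate compression from tip 1; both changes are pure configuration and cost no extra hardware.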

Read more...