
HDFS (Hadoop), Scikit-Learn & Apache Spark MLlib

On Linux: Ubuntu 14.04.5 LTS (trusty).

Apache Hadoop is an open-source software framework that breaks large data sets into blocks and distributes them across multiple servers for storage and processing. Hadoop's strength comes from its server network, known as a Hadoop cluster, which can process data much more quickly than a single machine. The non-profit Apache Software Foundation maintains the free, open-source Hadoop project, but commercial distributions have become very common.

The Hadoop Distributed File System (HDFS) is where a Hadoop cluster stores its data. Built for data-intensive applications, HDFS is designed to run on clusters of inexpensive commodity servers. It is optimized for high-performance, read-intensive operations and is resilient to failures in the cluster. It does not prevent failures, but it is unlikely to lose data, since by default HDFS keeps multiple copies of each of its data blocks (the default replication factor is 3).

Hadoop does batch processing, i.e. processing of blocks of data that have already been stored over a period of time. Initially, Hadoop's MapReduce was the standard framework for processing data in batches. Spark is an open-source cluster-computing framework that adds (near) real-time processing on top of this model: it extends the MapReduce model to efficiently support more types of computations, and by keeping intermediate data in memory it can be up to about 100 times faster than Hadoop MapReduce when batch processing large data sets.

Spark can create distributed datasets from any file stored in the Hadoop Distributed File System (HDFS) or in other storage systems supported by the Hadoop APIs (including your local filesystem, Amazon S3, Cassandra, Hive, HBase, etc.). Spark does not require Hadoop; it simply supports storage systems that implement the Hadoop APIs. Spark supports text files, SequenceFiles and any other Hadoop InputFormat, as the sketch below illustrates.
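A minimal PySpark sketch of this: the same textFile call covers the different storage backends, only the URI scheme changes. The file paths, HDFS port and bucket name below are hypothetical placeholders.

from pyspark import SparkContext

# A fresh local SparkContext; skip this if one already exists in the session
sc = SparkContext('local', 'storage-demo')

# Placeholder paths -- substitute ones that exist in your setup
local_rdd = sc.textFile('file:///tmp/sample.txt')
hdfs_rdd = sc.textFile('hdfs://localhost:9000/user/data/sample.txt')
s3_rdd = sc.textFile('s3a://my-bucket/sample.txt')   # requires the hadoop-aws connector

print(local_rdd.count())   # number of lines in the local file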

More on the differences between Hadoop and Spark, and further Spark details, are here: https://www.edureka.co/blog/spark-tutorial/



Install Hadoop in Stand-Alone Mode on Ubuntu 16.04

Once installed, run it as:
/usr/local/hadoop/bin/hadoop


Scikit-Learn ML Examples:
http://scikit-learn.org/stable/auto_examples/index.html#
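
As a quick taste of the library, here is a minimal, self-contained sketch using the bundled iris dataset (not taken from the examples page):

from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

# Load the bundled iris dataset and hold out a test set
iris = load_iris()
X_train, X_test, y_train, y_test = train_test_split(
    iris.data, iris.target, test_size=0.25, random_state=0)

# Fit a simple random-forest classifier and report its test accuracy
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
print(accuracy_score(y_test, clf.predict(X_test)))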






Spark Examples:

https://spark.apache.org/examples.html
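
One of the examples on that page is Pi estimation by random sampling; a PySpark sketch along those lines (assuming a fresh Python session):

import random
from pyspark import SparkContext

sc = SparkContext('local', 'pi-estimation')

NUM_SAMPLES = 1000000

def inside(_):
    # Sample a random point in the unit square and test whether it lands inside the quarter circle
    x, y = random.random(), random.random()
    return x * x + y * y < 1

count = sc.parallelize(range(NUM_SAMPLES)).filter(inside).count()
print("Pi is roughly %f" % (4.0 * count / NUM_SAMPLES))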

When pyspark starts successfully it prints the familiar banner:

Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /__ / .__/\_,_/_/ /_/\_\   version ...
      /_/

Running pyspark initially failed with an exception like the following:
Exception in thread "main" java.lang.UnsupportedClassVersionError: org/apache/spark/launcher/Main : Unsupported major.minor version 52.0

Class file version 52.0 corresponds to Java 8, so the Spark launcher needed a newer JVM than was installed. Installing Apache Maven and JDK 8 fixed it. Details here:
https://www.digitalocean.com/community/tutorials/how-to-install-java-with-apt-get-on-ubuntu-16-04

Another problem to keep in mind: some of the Spark MLlib example code on the website assumes the context and session variables already exist, so create them explicitly first:

from pyspark.context import SparkContext
from pyspark.sql.session import SparkSession

# Create a local SparkContext and wrap it in a SparkSession,
# which the DataFrame-based MLlib examples expect
sc = SparkContext('local')
spark = SparkSession(sc)
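
With sc and spark in place, the MLlib example code can run. A minimal sketch in the style of the pyspark.ml documentation (the tiny inline dataset is made up purely for illustration):

from pyspark.ml.classification import LogisticRegression
from pyspark.ml.linalg import Vectors

# A tiny, made-up training DataFrame of (label, features) rows
training = spark.createDataFrame([
    (0.0, Vectors.dense([0.0, 1.1, 0.1])),
    (1.0, Vectors.dense([2.0, 1.0, -1.0])),
    (0.0, Vectors.dense([2.0, 1.3, 1.0])),
    (1.0, Vectors.dense([0.0, 1.2, -0.5]))
], ["label", "features"])

# Fit a logistic regression model and inspect its coefficients
lr = LogisticRegression(maxIter=10, regParam=0.01)
model = lr.fit(training)
print(model.coefficients)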


Big Data with Apache Spark




HDFS with Spark: https://cbw.sh/spark.html

Setting Up Your Environment - In order to use HDFS and Spark, you first need to configure your environment so that you have access to the required tools. The easiest way to do this is to modify the .bashrc configuration file in your home directory.
