Hadoop Developer
Email: sravankumar.hadoop09@gmail.com
Contact: 510-509-6104
Professional Summary
Sound knowledge of Oracle AS 10g, JBoss 4.3, Apache Tomcat 6.x, Apache Axis2, and WebLogic 9.x/8.x.
Extensive experience developing custom widgets using Google Web Toolkit (GWT) and client-server communication.
Very good understanding of XML technologies: SAX, DOM, W3C XML Schema, JAXB 2.0, and XMLBeans.
Expertise in developing database applications using Oracle, MySQL, and SQL Server 2005.
Experience with Agile development practices such as iterations, Scrum meetings, test-driven development, pair programming, and code refactoring.
Sound knowledge of the VersionOne agile project management and tracking tool.
Extensive experience in all phases of the project life cycle, including conceptual, functional, and technical design as well as application programming.
Technical Skills
Hadoop Ecosystems: HDFS, MapReduce, YARN, Pig, Hive, HBase, Sqoop, Flume, Oozie, ZooKeeper, Spark, Scala, Kafka, and Storm
Data Discovery Applications: Elasticsearch and Kibana
NoSQL Databases
Hadoop Distributions
Programming Languages
Java Technologies
Web Servers: Apache Tomcat
Web Services
Databases
Development Tools
Scripting Languages
Professional Experience
Hadoop Developer
Toyota, Georgetown, KY
Apr 2015 - Present
Roles and Responsibilities
Developed an Apache Storm, Kafka, and HDFS integration project for real-time data analysis
Designed and developed Apache Storm topologies for inbound and outbound data, performing real-time ETL to surface the latest trends and keywords (see the topology sketch after this project's environment list)
Developed RabbitMQ-to-Flume, Kafka-to-Flume, Storm-to-Flume, RabbitMQ-to-Kafka, and RabbitMQ-to-RabbitMQ components
Designed the Apache Avro schema for serializing data written to HDFS
Developed a Kafka source for Apache Flume to load bulk data (100 million+ records) from files into HDFS
Migrated HBase data between different clusters
Used Hive to analyze the partitioned data
Involved in the design and development of the Customer and Provider portals
Developed GWT web components (Highcharts) for the portal dashboard
Developed SOA services and client bindings for GWT
Created a REST API for time-series data using the Dropwizard framework
Environment: Apache Storm 0.9, Hadoop YARN, HBase, Apache Kafka, Apache Flume, Dropwizard, Hive, SOA, GWT, JDK 1.5, RabbitMQ 3.5.3, MongoDB, Apache Avro, Protocol Buffers, Hortonworks Data Platform, Apache Ambari, InfluxDB, Oozie, ZooKeeper
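For illustration, a minimal sketch (in Java) of a Storm 0.9-era topology wiring a Kafka spout into an HDFS bolt, in the spirit of the real-time ETL topologies above; it assumes the storm-kafka and storm-hdfs modules of that generation, and the ZooKeeper address, topic, filesystem URL, and paths are placeholder assumptions, not the project's actual values.

    // Minimal Storm 0.9-style topology: Kafka spout -> HDFS bolt.
    // All connection strings, topic names, and paths are placeholders.
    import backtype.storm.Config;
    import backtype.storm.StormSubmitter;
    import backtype.storm.topology.TopologyBuilder;
    import storm.kafka.KafkaSpout;
    import storm.kafka.SpoutConfig;
    import storm.kafka.ZkHosts;
    import org.apache.storm.hdfs.bolt.HdfsBolt;
    import org.apache.storm.hdfs.bolt.format.DefaultFileNameFormat;
    import org.apache.storm.hdfs.bolt.format.DelimitedRecordFormat;
    import org.apache.storm.hdfs.bolt.rotation.FileSizeRotationPolicy;
    import org.apache.storm.hdfs.bolt.rotation.FileSizeRotationPolicy.Units;
    import org.apache.storm.hdfs.bolt.sync.CountSyncPolicy;

    public class InboundEtlTopology {
        public static void main(String[] args) throws Exception {
            // Kafka spout reading raw inbound events via ZooKeeper metadata.
            SpoutConfig spoutConf = new SpoutConfig(
                    new ZkHosts("zk1:2181"), "inbound-events", "/kafka", "etl-spout");

            // HDFS bolt writing pipe-delimited records, rotating files at
            // 128 MB and syncing to the filesystem every 1000 tuples.
            HdfsBolt hdfsBolt = new HdfsBolt()
                    .withFsUrl("hdfs://namenode:8020")
                    .withFileNameFormat(new DefaultFileNameFormat().withPath("/data/inbound/"))
                    .withRecordFormat(new DelimitedRecordFormat().withFieldDelimiter("|"))
                    .withRotationPolicy(new FileSizeRotationPolicy(128.0f, Units.MB))
                    .withSyncPolicy(new CountSyncPolicy(1000));

            TopologyBuilder builder = new TopologyBuilder();
            builder.setSpout("kafka-spout", new KafkaSpout(spoutConf), 2);
            builder.setBolt("hdfs-bolt", hdfsBolt, 2).shuffleGrouping("kafka-spout");

            StormSubmitter.submitTopology("inbound-etl", new Config(), builder.createTopology());
        }
    }

The shuffle grouping spreads tuples evenly across the HDFS bolt executors, and the size-based rotation keeps output files near HDFS block size.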
Hadoop Developer
General Motors, Detroit, MI
Jul 2014 - Mar 2015
Wrote multiple Java MapReduce jobs for data cleaning and preprocessing (a representative sketch follows this project's environment list)
Defined job flows using Oozie
Managed and reviewed Hadoop log files
Loaded and transformed large sets of structured, semi-structured, and unstructured data
Responsible for managing data coming from different sources and applications
Supported MapReduce programs running on the cluster
Involved in loading data from the UNIX file system to HDFS
Involved in designing schemas, writing CQL, and loading data using Cassandra
Gained good experience with CQL data manipulation commands and clauses
Worked with CQL collections
Installed and configured Hive, and wrote Hive UDFs
Worked with the DataStax Cassandra distribution
Involved in creating Hive tables, loading them with data, and writing Hive queries that run internally as MapReduce jobs
Developed MapReduce jobs to automate data transfer to and from HBase
Assisted with adding Hadoop processing to the IT infrastructure
Implemented near-real-time search
Worked on creating indexes and tuning them to return search results faster
Used the RESTful Java APIs and native APIs while working with Elasticsearch (see the Elasticsearch client sketch after this project's environment list)
Used Flume to collect web logs from the online ad servers and push them into HDFS
Implemented and executed MapReduce jobs to process the log data from the ad servers
Wrote efficient MapReduce code to aggregate the ad-server log data
Used Hive to analyze partitioned and bucketed data and compute various metrics for reporting
Gained working knowledge of writing Pig Load and Store functions
Environment: Hortonworks, MapReduce, HDFS, Hive, Pig, Flume, Oozie, Cassandra, Java 1.5, Elasticsearch, and Kibana
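As a representative sketch of the data-cleaning MapReduce jobs above: a map-only Java job that drops malformed ad-server log lines. The tab delimiter, expected field count, and input/output paths are illustrative assumptions, not the original job's values.

    // Map-only cleaning pass: keep only well-formed log lines.
    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.NullWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class LogCleanJob {
        public static class CleanMapper
                extends Mapper<LongWritable, Text, NullWritable, Text> {
            @Override
            protected void map(LongWritable key, Text value, Context ctx)
                    throws IOException, InterruptedException {
                // Assume tab-delimited ad-server logs with 8 fields.
                String[] fields = value.toString().split("\t");
                if (fields.length == 8 && !fields[0].isEmpty()) {
                    ctx.write(NullWritable.get(), value); // pass clean lines through
                }
            }
        }

        public static void main(String[] args) throws Exception {
            Job job = Job.getInstance(new Configuration(), "log-clean");
            job.setJarByClass(LogCleanJob.class);
            job.setMapperClass(CleanMapper.class);
            job.setNumReduceTasks(0); // map-only: no aggregation in this pass
            job.setOutputKeyClass(NullWritable.class);
            job.setOutputValueClass(Text.class);
            FileInputFormat.addInputPath(job, new Path(args[0]));
            FileOutputFormat.setOutputPath(job, new Path(args[1]));
            System.exit(job.waitForCompletion(true) ? 0 : 1);
        }
    }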
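And a minimal sketch of the Elasticsearch native Java (transport) API of the 1.x era referenced above; the node address, index name, and field names are placeholder assumptions.

    // Index and query ad-server events with the ES 1.x transport client.
    import org.elasticsearch.action.search.SearchResponse;
    import org.elasticsearch.client.transport.TransportClient;
    import org.elasticsearch.common.transport.InetSocketTransportAddress;
    import org.elasticsearch.index.query.QueryBuilders;

    public class LogSearchExample {
        public static void main(String[] args) {
            // Connect to a single node over the transport protocol (port 9300).
            TransportClient client = new TransportClient()
                    .addTransportAddress(new InetSocketTransportAddress("es-node", 9300));
            try {
                // Index one ad-server log event as a JSON document.
                client.prepareIndex("weblogs", "event")
                      .setSource("{\"url\":\"/home\",\"status\":200}")
                      .execute().actionGet();

                // Query for events by URL; results are near-real-time.
                SearchResponse resp = client.prepareSearch("weblogs")
                        .setQuery(QueryBuilders.matchQuery("url", "/home"))
                        .execute().actionGet();
                System.out.println("hits: " + resp.getHits().getTotalHits());
            } finally {
                client.close();
            }
        }
    }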
Hadoop Developer
Jackson, Detroit, MI
Jan 2013 - Apr 2014
Roles and Responsibilities
Gathered business requirements from the business partners and subject matter experts
Used Sqoop to import data from MySQL into HDFS on a regular basis
Wrote Hive queries for data analysis to meet the business requirements
Used the Agile Scrum methodology to help manage and organize a team of four developers, with regular code review sessions
Held weekly meetings with technical collaborators and actively participated in code review sessions with senior and junior developers
Used JUnit for unit testing and Continuum for integration testing (see the test sketch after this project's environment list)
Environment: Hadoop, MapReduce, HDFS, Hive, Pig, MySQL, Java (JDK 1.6), and JUnit
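A minimal JUnit 4 sketch of the unit-testing style mentioned above; the record-parsing helper under test is a hypothetical stand-in, not project code.

    import static org.junit.Assert.assertEquals;
    import static org.junit.Assert.assertNull;
    import org.junit.Test;

    public class RecordParserTest {

        // Hypothetical helper of the kind a Sqoop/Hive pipeline might use:
        // pulls the customer id out of a comma-delimited export line.
        static String customerId(String line) {
            if (line == null || line.isEmpty()) {
                return null;
            }
            return line.split(",")[0].trim();
        }

        @Test
        public void extractsCustomerIdFromValidLine() {
            assertEquals("C1001", customerId("C1001,2013-06-01,49.99"));
        }

        @Test
        public void returnsNullForEmptyLine() {
            assertNull(customerId(""));
        }
    }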
Involved in coding JSP pages for presenting data in the view layer of the MVC architecture
Involved in requirements gathering, analysis, and development of the Insurance Portal application
Used J2EE design patterns such as Factory Method, MVC, and Singleton, which made modules and code more organized, flexible, and readable for future upgrades
Worked with JavaScript to perform client-side form validations
Used Struts tag libraries as well as the Struts Tiles framework
Used JDBC with the Type-4 Oracle thin driver to access the database for application optimization and efficiency
Used the Data Access Object pattern to make the application more flexible toward future and legacy databases (see the DAO sketch after this project's environment list)
Actively involved in tuning SQL queries for better performance
Wrote generic functions to call Oracle stored procedures, triggers, and functions
Used JUnit for testing the application on the test servers
Provided support for system integration testing and user acceptance testing
Used Oracle SQL Developer to write SQL queries and procedures
Involved in resolving issues routed through trouble tickets from the production floor
Participated in technical/functional reviews
Involved in performance tuning of the application
Used Log4j for extensible logging, debugging, and error tracing
Discussed new developments and errors with the client and the project manager
Involved in Production Support and Maintenance
Involved in transferring data from MySQL to HDFS using Sqoop
Environment: JDK, J2EE, UML, Servlets, JSP, JDBC, Struts, XHTML, JavaScript, MVC, XML, XML Schema, Tomcat, Eclipse, CDH, Hadoop, HDFS, Pig, MySQL, and MapReduce
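A minimal sketch of the DAO-over-JDBC approach described above, using the Type-4 Oracle thin driver URL format; the table, host, and credentials are placeholders, and try-with-resources (Java 7+) is used here for brevity.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;

    public class PolicyDao {
        // Oracle thin (Type-4) JDBC URL; host/SID and credentials are placeholders.
        private static final String URL = "jdbc:oracle:thin:@dbhost:1521:ORCL";
        private static final String USER = "portal";
        private static final String PASSWORD = "changeit";

        // Fetch a policy holder's name by id; returns null when not found.
        public String findHolderName(long policyId) throws SQLException {
            String sql = "SELECT holder_name FROM policies WHERE policy_id = ?";
            try (Connection con = DriverManager.getConnection(URL, USER, PASSWORD);
                 PreparedStatement ps = con.prepareStatement(sql)) {
                ps.setLong(1, policyId);
                try (ResultSet rs = ps.executeQuery()) {
                    return rs.next() ? rs.getString("holder_name") : null;
                }
            }
        }
    }

Keeping the SQL behind a DAO method like this is what lets the persistence layer change databases without touching the Struts/JSP tier.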
Java Developer
US Steel, Pittsburgh, PA
Oct 2010 - Feb 2011
Roles and Responsibilities
Environment: Windows NT/2000/2003, XP, Windows 7/8, Java, UNIX, SQL, SOA, JDBC, JavaScript, Maven, JUnit, Agile/Scrum methodology, and SVN version control
Java Developer
Ecolite Technologies, India
Dec 2009 - Aug 2010
Roles and Responsibilities
Environment: Windows NT/2000/2003, XP, Windows 7/8, C, Java, JSP, Servlets, JDBC, EJB, DOM, XML, and SAX
Java Developer
Prokarma, India
Aug 2007 - Oct 2009
Roles and Responsibilities
Environment: Rational Application Developer 6.0, Rational Rose, Java, J2EE, JDBC, EJB, JSP, EL, JSTL, JUnit, XML, SOAP, WSDL, and SOA