Career Level: 2 Category: 15-1000 Occupation: 15
Skills: Hadoop and Spark architecture and their working principles
Hands-on experience writing and understanding complex SQL (Hive/PySpark DataFrames), optimizing joins while processing large volumes of data
UNIX shell scripting
Knowledge of the financial reporting ecosystem is a plus
Design and develop optimized data pipelines for batch and real-time data processing
Analysis, design, development, testing, and implementation of system applications
Demonstrated ability to develop and document technical and functional specifications and analyze software and system processing flows
Aptitude for learning and applying programming concepts
Ability to communicate effectively with internal and external business partners
Experience designing and building solutions using Kafka streams or queues
Experience with GitHub and CI/CD pipelines
NoSQL databases, e.g., HBase, Couchbase, MongoDB
Education: 3
Requirement:
Role: Big Data Developer
Description: Bachelor's degree in Engineering or Computer Science or equivalent, OR Master's in Computer Applications or equivalent.
5+ years of Cloud software development experience
5+ years of hands-on experience working with MapReduce, Hive, and Spark (Core, SQL, and PySpark)
3-5 years of solid experience with GCP BigQuery
3 years of experience working on GCP Compute Engine
Good overall understanding of Google Cloud Platform and its services
3-5 years of solid experience with Google Cloud Dataproc
Expertise with data structures, data modeling, and software architecture
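As an illustration of the join-optimization skill listed above, here is a minimal pure-Python sketch of the broadcast hash-join idea that engines such as Spark apply when one side of a join is small; the table names and contents are hypothetical, and real PySpark work would use DataFrame joins with a broadcast hint rather than this sketch.

```python
# Broadcast hash-join sketch: build a lookup map from the small
# table once, then stream the large table past it. This mirrors
# how Spark avoids shuffling the large side of a join when the
# small side fits in memory. All data below is hypothetical.

small = [(1, "US"), (2, "UK")]          # (country_id, code) - small dimension table
large = [(101, 1), (102, 2), (103, 1)]  # (txn_id, country_id) - large fact table

# "Broadcast" step: hash the small side by its join key.
lookup = {country_id: code for country_id, code in small}

# Probe step: one pass over the large side, inner-join semantics.
joined = [(txn_id, lookup[cid]) for txn_id, cid in large if cid in lookup]
print(joined)  # [(101, 'US'), (102, 'UK'), (103, 'US')]
```

In PySpark itself the equivalent intent is expressed with `pyspark.sql.functions.broadcast(small_df)` inside a `join`, letting the optimizer skip the shuffle of the large DataFrame.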