In this role as a Big Data Engineer, you will join their Data team, where your task will be to drive change and lead their journey from on-premise to the Cloud with their Data technology solutions.
You will be a key part of the firm's strategy to take advantage of Cloud-based data solutions and to consolidate the various disparate data hubs around the organisation into a single organisation-wide data solution, improving the firm's storage and utilisation of data.
You will be responsible for designing and building a data platform and leveraging big data solutions to deliver insights across the company. You will also develop solutions to retrieve information from various sources, build and optimise distributed processing solutions, design data models, and maintain data quality.
As a Big Data Engineer (Cloud Solutions) you will be a technical contributor with hands-on knowledge of all phases of building large-scale cloud-based distributed data processing systems and applications. Your responsibilities will be to:
Take part in designing and leading the implementation of the Cloud data architecture strategy
Lead development of a high-availability, easily scalable data platform to be used organisation-wide
Lead the design, implementation, and continuous delivery of pipelines using distributed Azure-based big data technologies, supporting data processing initiatives across batch and streaming datasets
Work with stakeholders across the business to understand their current data requirements and solutions, and evangelise the benefits of the cloud-based data solution
Process and manage high-volume real-time live streams of data
Assist in vendor selection and proof-of-concept solutions for earmarked technologies (Azure, AWS, Apache Kafka, Apache Spark)
Take responsibility for development using Scala and Python and Big Data frameworks and tooling such as Spark, EMR, Kafka, Storm, Jenkins, JFrog Artifactory and Databricks
Big Data Engineer – Cloud, Azure/AWS, SQL, NoSQL, Python, Agile – Key Skills and Experience Required:
5+ years of experience in Data Platform Administration/Engineering/Architecture
Experience working in a fast-paced Financial Services Trading Environment
Experience working with Relational Databases, in particular SQL Server 2016–2019
Experience in Architecting and building data pipelines for cloud data assets.
Experience working with Big Data tools such as Apache Spark and Apache Kafka for real-time data
Ability to provide architectural support by building proofs of concept and prototypes
Knowledge of NoSQL databases such as MongoDB
Substantial experience with parts of the Big Data ecosystem such as Hadoop, Hive, Spark and Presto
Proficiency working with structured, semi-structured and unstructured data sets, including social data, web logs and real-time streaming data feeds
Knowledge of reporting/BI tools such as Power BI
Advanced experience with one or more programming languages, ideally Python
Experience working with Microsoft Components
Familiarity with working in an Agile and Scrum environment
Collaborative individual who excels at working within a team and with business partners to identify, develop and deliver innovative data solutions.