Bromley, United Kingdom
Purpose of Role
This is a key role, accountable for the development and operations of the Rescue Data Solution, driving maximum value from data for Rescue users in line with company best practices. You will work as part of a cross-functional agile delivery team that includes big data engineers, data scientists, analysts, DevOps, and QA.
You will have the opportunity to work on complex problems, designing high-performance solutions that run on top of our cloud-based big data platform.
Data Solution Development and Delivery
- Own the roadmap for the development of the Rescue data solution, working closely with the Rescue business and data science teams to support the creation of innovative, advanced data analytics solutions.
- Lead solution design, development and support for the Rescue data solution across analytics and BI use cases.
- Select, pilot, and integrate open-source and third-party tools, frameworks, and applications for data transformation, analysis, machine learning, and other data processes.
- Develop a data-as-a-service capability, driving greater self-service by data consumers, enabled by automated data quality controls and clear data lineage in line with our platform standards.
- Lead agile working and project delivery, managing budget, scope, risks, and issues.
- Lead the development team to work to common standards and best practices, including meeting all regulatory requirements.
- Work as part of the Data Solution Owners team to uphold and evolve common standards and best practices. Collaborate to ensure that our data solutions are complementary rather than duplicative.
- Work closely with Technology teams to ensure that performance, support, and change delivery meet our needs.
- Own, manage, and resolve issues as they arise, and manage communication to users and leaders.
Skills and Experience
- Expert-level proficiency in at least one programming language such as Java, Python, or R.
- Understanding of and experience with the AWS cloud storage and computing platform (especially S3, Athena, Redshift, Glacier, EMR, EC2).
- Hands-on experience with a stream data processing framework such as Apache Spark or Apache Flink.
- Demonstrated experience with data integration tools such as Talend, Pentaho, or Informatica, covering both ETL and ELT processing.
- Working experience with data preparation tools such as SAS or Alteryx.
- Experience with at least one BI tool, such as Tableau, Qlik, or Looker.
- Solid understanding of enterprise patterns and of applying best practices when integrating various inputs and outputs at scale.
- Knowledge of software best practices, such as Test-Driven Development (TDD) and Continuous Integration (CI).
- Understanding of DevOps principles, tools, and the intersection with cloud architecture.
- Experience delivering projects in an agile environment; familiarity with Jira, Git, and Stash.
- Solid background in mathematics, including statistics, time series analysis, and optimization.
- Familiarity with machine learning techniques such as linear regression, logistic regression, and support vector machines.