Full Time Minneapolis Tech Job
Are you inspired by innovation, hard work and a passion for data?
If so, this may be the ideal opportunity to leverage your Software Engineering, Data Engineering or Data Analytics experience to design, develop and innovate big data solutions for a diverse set of clients.
At phData, our proven success has skyrocketed the demand for our services, resulting in quality growth and an expanded presence at our company headquarters conveniently located in Downtown Minneapolis (COCO).
As the world’s largest pure-play Big Data services firm, our team includes Apache committers, Spark experts and the most knowledgeable Scala development team in the industry. phData has earned the trust of customers by demonstrating our mastery of Hadoop services and our commitment to excellence.
In addition to a phenomenal growth and learning opportunity, we offer competitive compensation including base salary and annual bonus, plus excellent perks: extensive training, paid Cloudera certifications, generous PTO, and employee equity.
As a Data Engineer, your responsibilities include:
- Integrate data from a variety of data sources (data warehouse, data marts) utilizing on-prem or cloud-based data structures (AWS); identify new and existing data sources
- Develop, implement and optimize streaming, data lake, and analytics big data solutions
- Create and execute testing strategies including unit, integration, and full end-to-end tests of data pipelines
- Recommend among Kudu, HBase, HDFS, and relational databases based on each technology's strengths
- Utilize ETL processes to build data repositories; integrate data into Hadoop data lake using Sqoop (batch ingest), Kafka (streaming), Spark, Hive or Impala (transformation)
- Adapt and learn new technologies in a quickly changing field
- Be creative; evaluate and recommend big data technologies to solve problems and create solutions
- Recommend and implement best tools to ensure optimized data performance; perform Data Analysis utilizing Spark, Hive, and Impala
- Work on a variety of internal and open source projects and tools
Qualifications:
- Previous experience as a Software Engineer, Data Engineer, or Data Analyst
- Solid programming experience in Python, Java, Scala, or another general-purpose programming language
- Production experience in core Hadoop technologies including HDFS, Hive and YARN
- Hands-on experience with one or more ecosystem products/languages such as HBase, Spark, Impala, Solr, Kudu, etc.
- Strong working knowledge of SQL and the ability to write, debug, and optimize distributed SQL queries
- Excellent communication skills; previous experience working with internal or external customers
- Strong analytical abilities; ability to translate business requirements and use cases into a Hadoop solution, including ingestion of many data sources, ETL processing, data access, and consumption, as well as custom analytics
- 4-year Bachelor's degree in Computer Science or a related field
If you would like to speak with me about this, send any of the following: a summary, resume, or links to LinkedIn or GitHub. Heck, I’ll take a good haiku:
Principal & Evangelist
Minnesota Headhunter, LLC