OpenSea is the first and largest marketplace for non-fungible tokens, or NFTs. Applications for NFTs include collectibles, gaming items, domain names, digital art, and many other items backed by a blockchain. OpenSea is an open, inclusive web3 platform, where individuals can come to explore NFTs and connect with each other to purchase and sell NFTs. At OpenSea, we’re excited about building a platform that supports a brand new economy based on true digital ownership and are proud to be recognized as Y Combinator’s #3 ranked top private company.
When hiring candidates, we look for signals that a candidate will thrive in our culture, where we default to trust, embrace feedback, grow rapidly, and love our work. We also know how critical it is to celebrate and support our differences. Employing a team rich in diverse thoughts, experiences, and opinions enables our employees, our product, and our community to flourish. We are dedicated to equal employment opportunities regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity, or Veteran status. To help facilitate this, we support remote, hybrid, or onsite work in New York City, San Francisco, or Silicon Valley for the majority of our opportunities.
Our engineering team at OpenSea is in search of a strong and curious Data Engineer to take charge of our analytics and machine learning pipelines. As a member of our data engineering team, you will collaborate with other engineers, data analysts, data scientists, and product managers, contributing significantly to the growth of one of the most rapidly expanding NFT marketplaces in the Web3 ecosystem.
- Design, build, and maintain data pipelines from end-to-end, ensuring data accuracy, availability, and quality for the Analytics and Data Science teams
- Collaborate closely with Data Scientists to understand data requirements, develop data models, and optimize data pipelines for advanced analytics and machine learning use cases
- Develop and maintain scalable, efficient, and reliable ETL processes, using best practices for data ingestion, storage, and processing
- Work with stakeholders to identify and prioritize analytics requirements, and build out necessary analytics tools and dashboards
- Proactively monitor data pipelines, troubleshoot, and resolve data-related issues
- Contribute to the continuous improvement of data engineering practices, including documentation, code reviews, and knowledge sharing
- 5+ years of experience in data engineering
- Experience with big data technologies such as Snowflake, Hadoop, Spark, Airflow, or Flink
- Strong knowledge of AWS services, particularly those related to data storage, processing, and analytics (e.g., S3, Redshift, Glue, EMR, Kinesis, Lambda, and Athena)
- Expertise in SQL and proficiency in at least one programming language (Python, Go, Java)
- Familiarity with data warehousing concepts and schema design principles (e.g., Star Schema, Snowflake Schema)
- Strong problem-solving skills, a data-driven mindset, and a passion for working with large, complex datasets
- Excellent communication and collaboration skills, with the ability to work effectively across teams and stakeholders
If you don’t think you meet all of the criteria above but are still interested in the job, please apply. Nobody checks every box, and we’re looking for someone excited to join the team.
The base salary for this full-time position, which spans multiple internal levels depending on qualifications, ranges from $160,000 to $305,000, plus benefits and equity.
To apply for this job, please visit jobs.lever.co.