Big Data Developer (archived)
To achieve the best results, we constantly challenge assumptions and establish new approaches, following our company's motto: never settle.
You are a good match if you:
• Have experience with Spark
• Are proficient in a programming language such as Java, Python, or Scala
• Have knowledge of and experience with Unix environments and shell scripting
• Have at least 2 years of RDBMS background (SQL Server, Oracle, and/or DB2)
• Have experience importing and exporting data with Sqoop between HDFS and relational database systems
• Have knowledge of workflow schedulers such as Oozie or similar tools
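For a sense of the day-to-day Sqoop work mentioned above, the commands typically look like the sketch below. All connection details (host, credentials, table names, HDFS paths) are hypothetical placeholders, not part of this posting, and the commands assume a configured Hadoop cluster with JDBC drivers installed:

```shell
# Hedged sketch: pull a table from an RDBMS into HDFS with Sqoop.
# Every value below (host, user, table, paths) is a placeholder.
sqoop import \
  --connect jdbc:oracle:thin:@//db-host:1521/ORCL \
  --username etl_user \
  --password-file /user/etl/.db_password \
  --table ORDERS \
  --target-dir /data/raw/orders \
  --num-mappers 4

# The reverse direction: export processed HDFS data back to a
# relational table (the target table must already exist).
sqoop export \
  --connect jdbc:oracle:thin:@//db-host:1521/ORCL \
  --username etl_user \
  --password-file /user/etl/.db_password \
  --table ORDERS_SUMMARY \
  --export-dir /data/marts/orders_summary \
  --num-mappers 4
```

In practice, jobs like these are wrapped in an Oozie workflow or another scheduler, which matches the last requirement above.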
You get extra points for:
• Hands-on experience with Talend used in conjunction with Hadoop MapReduce, Spark, or Hive
• Solid understanding of file formats and data serialization formats such as Protobuf, Avro, or JSON
• Experience with Google Cloud Platform (e.g. Google BigQuery)
• Experience with notebook/IDE tools such as Hue, Jupyter, or Zeppelin
You will be responsible for:
• Developing data ingestion pipelines
• Contributing to the Big Data team across Poland
We offer:
• Flexible hours and home office
• Sports and medical package
• Fresh fruit twice a week
And on top of these:
• Me&You, no Mr&Mrs
• No dress code
• X-mas party and picnic
We stick together, and it's pretty amazing to see how smoothly new intivers find their place in our culture. We love to see that.