@jimhorng
Created April 14, 2018 07:11
Please let me know if you or your friends are interested in this position, thanks :)
✔ Company: Digital River https://www.104.com.tw/jobbank/custjob/index.php?r=cust&j=384a426d3c463f6952583a1d1d1d1d5f2443a363189j56
✔ Title: Principal Software Engineer
✔ Mission:
Build end-to-end, large-scale machine learning systems that improve the value proposition for e-commerce, e.g. increasing payment success rates, detecting fraud, and improving resource utilization.
✔ Package: Negotiable, up to 2-2.5M
✔ Requirement:
Machine Learning
* Machine learning domain knowledge (bias-variance tradeoff, exploration/exploitation) and understanding of various model families, including neural networks, decision trees, Bayesian models, instance-based learning, association learning, and deep learning algorithms. Hands-on experience adapting common families of models, along with feature engineering, feature selection, and other practical machine learning issues such as overfitting.
* Experience using machine learning libraries or platforms, including Caffe, Theano, scikit-learn, or Spark MLlib, in production or commercial products
* Experience with building end-to-end machine learning systems
* Track record of diving into data to discover hidden patterns and solving operational problems with data science
Software Engineering
* Solid engineering and coding skills. Ability to write high-performance, production-quality code. Expertise in one or more object-oriented languages, including Python, Go, Java, or C++, and an eagerness to learn more
* Experience with building scalable production services
Data Engineering
* Experience building and maintaining large-scale and/or real-time complex data processing pipelines using Kafka, Hadoop MapReduce, Hive, Storm, Spark, and ZooKeeper
* Experience with stream processing (e.g. Storm, Spark, Flink) and graph processing technologies
* Experience using data visualization tools (e.g. Tableau, Shiny, D3.js)