You are given an integer N. Print the factorial of N, as defined below.
N! = N * (N-1) * (N-2) * ... * 3 * 2 * 1
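For illustration, one possible sketch in JavaScript (the language choice and the assignment of N at the top of the script are just one of the input options the instructions allow; BigInt keeps the result exact for large N):

```javascript
// Compute N! iteratively. BigInt avoids overflow for large N.
function factorial(n) {
  let result = 1n;
  for (let i = 2n; i <= BigInt(n); i++) {
    result *= i;
  }
  return result;
}

// Input assumed assigned at the top of the script; it could
// equally be read from process.argv.
const N = 5;
console.log(factorial(N).toString()); // → 120
```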
/*
Setup:
npm install ws
Usage:
Create an API key in Rancher and start up with:
node socket.js address.of.rancher:8080 access_key secret_key project_id
*/
var WebSocket = require('ws');
version: '3'
services:
  rancher-server:
    image: rancher/server
    container_name: rancher-server
    ports:
      - "8080:8080"
    volumes:
      - /root/mysql:/var/lib/mysql
      - /root/mysql-log:/var/log/mysql
Our north star metric for this year is 1,000 placements made through our platform. As such, the roles we individually play may shift fluidly based on the situation. That said, here are some of the gaps we need filled at our stage:
Glints whitelabels its platform to several clients. These are sites that share the same codebase as the main Glints site, but each has its own custom styles and, at times, its own components.
Examples include IE Singapore and JOS, versus the main Glints site.
How would you design our architecture to support flexible and extensible configuration/customization for whitelabel sites?
Restaurant | Opening Hours
---|---
Kushi Tsuru | Mon-Sun 11:30 am - 9 pm
Osakaya Restaurant | Mon-Thu, Sun 11:30 am - 9 pm / Fri-Sat 11:30 am - 9:30 pm
The Stinking Rose | Mon-Thu, Sun 11:30 am - 10 pm / Fri-Sat 11:30 am - 11 pm
McCormick & Kuleto's | Mon-Thu, Sun 11:30 am - 10 pm / Fri-Sat 11:30 am - 11 pm
Mifune Restaurant | Mon-Sun 11 am - 10 pm
The Cheesecake Factory | Mon-Thu 11 am - 11 pm / Fri-Sat 11 am - 12:30 am / Sun 10 am - 11 pm
New Delhi Indian Restaurant | Mon-Sat 11:30 am - 10 pm / Sun 5:30 pm - 10 pm
Iroha Restaurant | Mon-Thu, Sun 11:30 am - 9:30 pm / Fri-Sat 11:30 am - 10 pm
Rose Pistola | Mon-Thu 11:30 am - 10 pm / Fri-Sun 11:30 am - 11 pm
Alioto's Restaurant | Mon-Sun 11 am - 11 pm
There are 2 questions in the assessment, which you can answer in any language. For each question, please commit your code to a private repository on GitHub, with a README.md explaining how to run it. When you’re done, simply add seahyc and yjwong as collaborators on your private repository.
There's no time limit, but do time yourself for each question, from the moment you start reading it until your final local commit, and let me know how long you took for each question in the README. We recommend setting aside 1.5 hours for each task in the test.
You can take the input either via command-line arguments, or simply assume that it's assigned at the top of your script. Either way, let us know how to inject an input, and we'll adapt our test cases accordingly. Most importantly, we trust you to write your solutions without any external help.
There are 3 questions in this part.
There's no time limit, but do time yourself for each question, from the moment you start reading it until your final local commit, and let us know how long you took for each question in your README.
Given a large chunk of text, write a program that can identify the most frequently occurring trigram in it. If there are multiple trigrams with the same frequency, then print the one which occurs first. Assume that trigrams are groups of three consecutive words in the same sentence which are separated by nothing but a single space and are case insensitive.
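One possible sketch of such a program in JavaScript (the sentence-splitting regex is an assumption; the task only specifies that trigrams never cross sentence boundaries):

```javascript
// Find the most frequent trigram: three consecutive words within a
// sentence, case-insensitive. Ties go to the trigram seen first.
function mostFrequentTrigram(text) {
  const counts = new Map(); // Map preserves insertion (first-seen) order
  // Assumed sentence boundaries: runs of ., !, or ?
  for (const sentence of text.toLowerCase().split(/[.!?]+/)) {
    const words = sentence.trim().split(/\s+/).filter(Boolean);
    for (let i = 0; i + 2 < words.length; i++) {
      const trigram = words.slice(i, i + 3).join(' ');
      counts.set(trigram, (counts.get(trigram) || 0) + 1);
    }
  }
  // Iterating the Map in insertion order with a strict ">" means the
  // earliest-occurring trigram wins any tie in frequency.
  let best = null;
  for (const [trigram, count] of counts) {
    if (best === null || count > counts.get(best)) best = trigram;
  }
  return best;
}

console.log(mostFrequentTrigram('I came back and I came home. I came back again.'));
// → "i came back"
```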
This assignment has 2 parts. You may also attempt the bonus section, which will definitely add to the final score if well executed.
Given a set of keywords, find the top k most similar documents among a set of N documents from the dataset (click this link to download all-the-news.zip). Implement this part of the task with Apache Spark running on Docker; your Spark setup should consist of more than one worker.
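For the multi-worker requirement, a minimal docker-compose sketch (the image name, ports, and environment variables follow the community bitnami/spark image and are illustrative, not prescribed by the task):

```yaml
version: '3'
services:
  spark-master:
    image: bitnami/spark        # illustrative image choice
    environment:
      - SPARK_MODE=master
    ports:
      - "8080:8080"             # master web UI
      - "7077:7077"             # cluster port workers connect to
  spark-worker-1:
    image: bitnami/spark
    environment:
      - SPARK_MODE=worker
      - SPARK_MASTER_URL=spark://spark-master:7077
    depends_on:
      - spark-master
  spark-worker-2:
    image: bitnami/spark
    environment:
      - SPARK_MODE=worker
      - SPARK_MASTER_URL=spark://spark-master:7077
    depends_on:
      - spark-master
```

Jobs submitted with `spark-submit` against `spark://spark-master:7077` would then be distributed across both workers.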
Your submission:
You work at a venture capital firm, HardBank, which is hell-bent on finding the next unicorn startup to invest in. The firm has appointed you to build a tool to search through all the startups for patterns of success and failure. If it works, they’ll also open it up for public access.
Your general partner, Mahayoshi Sonny, has dumped on you a JSON file of company profiles (https://ufile.io/gzpnra8msk1s902) that his data team has assiduously compiled over many years.
He’d like you to build an API server, with documentation and a backing relational database, that will allow any client to navigate that sea of data easily and intuitively. The front-end team will later use that documentation to build the front-end clients.
The choice of language and server framework is at your discretion; you just need to explain your decision later. As for the database, you can pick any, so long as it is relational. In the dataset, each company has many attributes, but for this projec