@risha700
Last active March 13, 2024 23:02

Q1.


// Toy class
class Toy {
    constructor(id, name, price, category) {
        this.id = id;
        this.name = name;
        this.price = price;
        this.category = category;
    }
}

// Supplier class 
class Supplier {
    constructor(name, apiUrl, format) {
        this.name = name;
        this.apiUrl = apiUrl;
        this.format = format;
    }

    async fetchToys() {
        // Call the third-party supplier API based on its format
        let toysData;
        if (this.format === 'XML') {
            toysData = await this.fetchFromSOAP();
        } else if (this.format === 'JSON') {
            toysData = await this.fetchFromJSON();
        } else if (this.format === 'CSV') {
            toysData = await this.fetchFromCSV();
        } else {
            throw new Error(`Unsupported supplier format: ${this.format}`);
        }

        // Normalize and standardize the data to our own format (JSON)
        return this.normalizeData(toysData);
    }
    // Dispatching on format with if/else is a weak approach in general, but
    // while we are only dealing with three types it keeps the separation of concerns clear.
    async fetchFromSOAP() {
        //TODO: Implement SOAP API
    }

    async fetchFromJSON() {
        //TODO: Implement JSON API
    }

    async fetchFromCSV() {
        //TODO: Implement CSV API call logic
    }

    normalizeData(data) {
        // Normalize fields like id, name, price, category, etc., mapping each
        // raw record into our own Toy shape (assumes the fetchers return
        // records that already expose these fields).
        return data.map(item => new Toy(item.id, item.name, item.price, item.category));
    }
}

// Backend API layer to handle requests, e.g. behind Express middleware
class BackendAPI {
    constructor() {
        // Initialize suppliers
        this.suppliers = [
            new Supplier('Supplier1', 'http://kidsworld.com', 'XML'),
            new Supplier('Supplier2', 'http://toyuniverse.com', 'JSON'),
            new Supplier('Supplier3', 'http://toyuniverse.com', 'CSV')
            
        ];
    }

    async getToysBySupplier(supplierName) {
        // Find the supplier by name
        const supplier = this.suppliers.find(supplier => supplier.name === supplierName);
        if (!supplier) {
            throw new Error('Supplier not found');
        }

        // Fetch toys from the supplier
        const toys = await supplier.fetchToys();

        return toys;
    }
}

// Example usage
const backendAPI = new BackendAPI();
backendAPI.getToysBySupplier('Supplier1')
    .then(toys => {
        console.log('Toys from Supplier1:', toys);
    })
    .catch(error => {
        console.error('Error:', error);
    });
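As the comment inside `fetchToys` notes, the if/else dispatch on format is workable for three types; an alternative sketch replaces it with a lookup table of format-specific fetchers (the `fetchers` and `fetchToysFor` names are illustrative, and the fetcher bodies are stubs standing in for the real API calls):

```javascript
// Map each supplier format to its fetcher, avoiding the if/else chain.
// Adding a fourth format becomes a one-line change to this table.
const fetchers = {
    XML: async () => [],   // would call the SOAP endpoint
    JSON: async () => [],  // would call the JSON endpoint
    CSV: async () => [],   // would download and parse the CSV export
};

async function fetchToysFor(format) {
    const fetcher = fetchers[format];
    if (!fetcher) {
        throw new Error(`Unsupported supplier format: ${format}`);
    }
    return fetcher();
}
```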


Q2.

Note: you can't change anything on the ToyUniverse and ERP side, and you also can't change the timeout settings on your server. However, you can have your own database or save files on your server if needed.


To resolve the timeouts when processing webhook notifications from ToyUniverse and parsing order details from their SFTP server, we can combine a caching mechanism with a more efficient way of handling incoming notifications. Here's how we can approach this problem:

- Maintain a database or a file system on the server to store parsed order details retrieved from ToyUniverse's SFTP server.

- Implement a queueing system for incoming webhook notifications to ensure that they are processed sequentially and efficiently.

- Instead of downloading and parsing all 100,000 XML files from the SFTP server every time, implement an indexing mechanism to quickly locate the relevant file based on the order ID, through a mapping (e.g. a hash map) between order IDs and file paths or filenames. This mapping can be stored in a database or a structured file on your server.

- Only download and parse the specific files identified by the mapping, rather than processing all files in the SFTP folder.

- Monitor system performance and resource utilization regularly to identify bottlenecks and optimize system components accordingly.
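The indexing and targeted-download steps above can be sketched as follows. The in-memory `Map` stands in for the database-backed index, and the `order_<orderId>.xml` filename convention is a hypothetical assumption, not something stated by ToyUniverse:

```javascript
// Build a persistent index mapping order IDs to SFTP filenames, so a
// webhook handler can fetch exactly one file instead of scanning all 100,000.
function buildOrderIndex(fileNames) {
    const index = new Map();
    for (const name of fileNames) {
        // Hypothetical convention: files are named "order_<orderId>.xml".
        const match = name.match(/^order_(\w+)\.xml$/);
        if (match) index.set(match[1], name);
    }
    return index;
}

// On each webhook notification, look up the single relevant file.
function resolveOrderFile(index, orderId) {
    const fileName = index.get(orderId);
    if (!fileName) {
        throw new Error(`No SFTP file indexed for order ${orderId}`);
    }
    return fileName;
}
```

The index only needs to be rebuilt incrementally as new files appear on the SFTP server, so each webhook resolves to a single targeted download.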

Q3.

Problem domain:

Can you please explain how you would approach this, and how you would avoid or manage data-saving conflicts if there are data submissions from other sources, especially offline?

Solution:

Firstly, implementing data collection in offline mode sounds like a perfect candidate for Service Workers. Modern JavaScript and browser APIs support them on almost every major platform, and they are used heavily by Google Analytics and similar tools.

Cache busting is one of the more challenging topics in software development, but Service Workers largely solve it out of the box. Technically speaking, I would resort to a UUID or timestamp on every link included in the frontend, and verify stale content once the client can communicate with the server. For data integrity, each request should include identifiers such as the machine IP or IMEI along with every record submitted when the client comes back online.
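A minimal sketch of the two ideas above, assuming a timestamp-based version tag for cache busting and a simple last-write-wins rule for conflict resolution (the `updatedAt` and `deviceId` field names are illustrative, not from the original):

```javascript
// Append a cache-busting version tag (here a timestamp or UUID) to an
// asset URL so stale cached content can be distinguished from fresh content.
function withVersionTag(url, version) {
    const sep = url.includes('?') ? '&' : '?';
    return `${url}${sep}v=${version}`;
}

// Resolve conflicting offline submissions for the same record with a
// last-write-wins rule based on a client-side timestamp.
function mergeSubmissions(existing, incoming) {
    return incoming.updatedAt > existing.updatedAt ? incoming : existing;
}

// Tag a queued offline submission with its origin device for integrity
// checks when the client comes back online.
function tagSubmission(record, deviceId) {
    return { ...record, deviceId, updatedAt: Date.now() };
}
```

Last-write-wins is the simplest policy; depending on the domain, a field-level merge or a server-side review queue for conflicting device IDs may be more appropriate.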
