Chitrank Dixit (Chitrank-Dixit): GitHub Gists
Chitrank-Dixit / app.py
Created September 20, 2013 04:15 — forked from scturtle/app.py
from flask import Flask, request, session, redirect, url_for
import urllib
import requests
app = Flask(__name__)
app.secret_key = 'iwonttellyou'
redirect_uri = 'http://localhost:5000/callback'
client_id = '' # get from https://code.google.com/apis/console
client_secret = ''
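The preview above truncates before the routes. As a minimal sketch of the first step of the flow, the snippet below builds the Google OAuth 2.0 authorization URL that a /login route would redirect the user to (the endpoint and parameter names follow Google's OAuth 2.0 web-server flow; it uses Python 3's urllib.parse rather than the Python 2 urllib imported in the gist):

```python
from urllib.parse import urlencode

def build_auth_url(client_id, redirect_uri, scope="email"):
    """Build the Google OAuth 2.0 authorization URL for the /login redirect."""
    params = {
        "response_type": "code",
        "client_id": client_id,
        "redirect_uri": redirect_uri,
        "scope": scope,
    }
    return "https://accounts.google.com/o/oauth2/auth?" + urlencode(params)

url = build_auth_url("my-id", "http://localhost:5000/callback")
```

After the user consents, Google redirects back to redirect_uri with a `code` parameter, which the /callback route exchanges for an access token.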
Chitrank-Dixit / custom_panel.css
Created November 4, 2013 15:58
Customized panels used in Eventus's trending_event.html and profile.html
.well-inverse {
  min-height: 20px;
  padding: 19px;
  margin-bottom: 20px;
  background-color: #ffffff;
  border: 1px solid #e3e3e3;
  -webkit-border-radius: 4px;
  -moz-border-radius: 4px;
  border-radius: 4px;
  /* -webkit-box-shadow: 5px 5px rgba(0, 0, 0, 0.05);
Chitrank-Dixit / getJson.js
Last active December 27, 2015 20:29
Get JSON data from any URL into a Knockout.js view model
// write this snippet inside the Knockout viewModel
// getting all the comments from the server
$.getJSON("comments.php", function(commentModels) {
    var t = $.map(commentModels.comments, function(item) {
        return new Comment(item);
    });
    self.comments(t);
});
Chitrank-Dixit / sendJson.js
Created November 9, 2013 11:35
Send form data as JSON from a Knockout.js view model
// write this under the Knockout viewModel
// Example: send all of the user's comments to the server;
// the same pattern can be adapted to other payloads
self.save = function() {
    return $.ajax({
        url: "comments.php",
        contentType: 'application/json',
        type: 'POST',
        data: JSON.stringify({
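The AJAX call above serializes the view model to JSON before POSTing it. The same serialization step, sketched in Python with the standard json module (the field names here are illustrative, not the gist's actual schema):

```python
import json

# Illustrative comment payload, mirroring what JSON.stringify would produce
comments = [
    {"author": "alice", "text": "first comment"},
    {"author": "bob", "text": "second comment"},
]
payload = json.dumps({"comments": comments})
```

The resulting string round-trips cleanly: json.loads(payload) returns the same nested structure.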
Chitrank-Dixit / linkedin_data.php
Created December 26, 2013 14:26
Get profile data from LinkedIn after OAuth authorization
<?php
// Change these
define('API_KEY', 'Your App API KEY' );
define('API_SECRET', 'Your App API SECRET' );
define('REDIRECT_URI', 'http://' . $_SERVER['SERVER_NAME'] . $_SERVER['SCRIPT_NAME']);
define('SCOPE', 'r_fullprofile r_emailaddress rw_nus' );
// You'll probably use a database
session_name('linkedin');
session_start();
Chitrank-Dixit / edit_date.py
Created December 29, 2013 04:42
A simple Python script converting dates from dd/mm/yyyy to yyyy/mm/dd format. The dates are read from the file 'testdate.txt' in dd/mm/yyyy format; after conversion the results are collected in the convDate list.
# from dd/mm/yyyy to yyyy/mm/dd format of dates
f = open('testdate.txt')
i = f.read()
print(i)
date = i.split("\n")
print(date)
convDate = []
for adate in date:
    if '/' in adate:
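The preview cuts off inside the loop. The conversion itself can be sketched as a pair of small functions (splitting each date on '/' and reassembling in reverse order, which is presumably what the gist's loop body does):

```python
def convert_date(adate):
    """Convert a single date from dd/mm/yyyy to yyyy/mm/dd."""
    dd, mm, yyyy = adate.split("/")
    return "/".join([yyyy, mm, dd])

def convert_dates(lines):
    """Apply the conversion to every line that looks like a date."""
    return [convert_date(adate) for adate in lines if "/" in adate]

# convert_date("25/12/2013") returns "2013/12/25"
```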
Chitrank-Dixit / lexical_analyser.py
Last active October 7, 2022 16:40
The following Python program takes a simple C program and performs lexical analysis on it (a very buggy program; more cases need fixing)
# The following program works as a lexical analyser
#
# Write a C/C++ program which reads a program written
# in any programming language (say C/C++/Java) and then perform
# lexical analysis. The output of program should contain the
# tokens i.e. classification as identifier, special symbol, delimiter,
# operator, keyword or string. It should also display the number of
# identifiers, special symbol, delimiter, operator, keyword, strings
# and statements
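The assignment described in the comments above can be sketched as a small regex-based tokenizer. This is a simplified illustration, not the gist's actual implementation: the token classes and keyword list are cut down, and a real lexer would also handle comments, numbers with decimals, and multi-character operators:

```python
import re

KEYWORDS = {"int", "float", "char", "if", "else", "while", "for", "return"}

# Order matters: strings are matched before identifiers and operators
TOKEN_SPEC = [
    ("string",     r'"[^"]*"'),
    ("number",     r"\d+"),
    ("identifier", r"[A-Za-z_]\w*"),
    ("operator",   r"[+\-*/=<>!]=?"),
    ("delimiter",  r"[;,]"),
    ("special",    r"[(){}\[\]]"),
]

def tokenize(code):
    """Classify each token in a C-like source string."""
    pattern = "|".join("(?P<%s>%s)" % (name, pat) for name, pat in TOKEN_SPEC)
    tokens = []
    for m in re.finditer(pattern, code):
        kind = m.lastgroup
        value = m.group()
        # Reclassify identifiers that are reserved words
        if kind == "identifier" and value in KEYWORDS:
            kind = "keyword"
        tokens.append((kind, value))
    return tokens
```

Counting each class (identifiers, keywords, operators, and so on) is then a matter of tallying the first element of each tuple.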
Chitrank-Dixit / extract_mail.py
Created February 2, 2014 21:21
Extract email addresses from pages, text files, etc. using the Python re module (regular expressions)
# extract email addresses using regular expressions
import re
import urllib2
# get_next_target() takes a page and returns every email-like string in it
def get_next_target(page):
    match = re.findall(r'[\w.-]+@[\w.-]+', page)
    if match:
        return match
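A short usage example for the function above, with a self-contained copy of it so the snippet runs on its own (the sample addresses are made up):

```python
import re

def get_next_target(page):
    # Find every email-like substring in the page text
    match = re.findall(r"[\w.-]+@[\w.-]+", page)
    if match:
        return match

sample = "Contact alice@example.com or bob.smith@mail.example.org for details."
emails = get_next_target(sample)
```

Note that the pattern is deliberately loose; it will also match strings that are not strictly valid addresses.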
Chitrank-Dixit / web_crawler.py
Last active November 27, 2016 09:46
A simple web crawler that crawls through pages using the urllib2 Python module
# Write a web crawler
'''
A crawler is a program that starts with a url on the web (ex: http://python.org), fetches the web-page corresponding to that url, and parses all the links on that page into a repository of links. Next, it fetches the contents of any of the url from the repository just created, parses the links from this new content into the repository and continues this process for all links in the repository until stopped or after a given number of links are fetched.
'''
# urllib2 for downloading web pages
import urllib2
# get_next_target() takes a page and checks for the positions of the links it finds from '<a href='
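The process described above, minus the network fetch, can be sketched against an in-memory map of pages. Link extraction here is regex-based for brevity (a simplification; real HTML should be parsed with a proper parser), and the crawl is a breadth-first loop over a repository of links, as the description explains:

```python
import re

def get_links(page):
    """Return all URLs found in <a href="..."> attributes of an HTML page."""
    return re.findall(r'<a\s+href="([^"]+)"', page)

def crawl(pages, seed, max_links=10):
    """Breadth-first crawl over an in-memory {url: html} map of pages."""
    to_crawl = [seed]
    crawled = []
    while to_crawl and len(crawled) < max_links:
        url = to_crawl.pop(0)
        if url in crawled:
            continue
        crawled.append(url)
        for link in get_links(pages.get(url, "")):
            if link not in crawled:
                to_crawl.append(link)
    return crawled
```

Swapping pages.get(url, "") for a real fetch (urllib2.urlopen(url).read() in the gist's Python 2 world) turns this into the crawler the description outlines.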
#!flask/bin/python
from flask import Flask, jsonify, abort, request, make_response, url_for
from flask_httpauth import HTTPBasicAuth
app = Flask(__name__, static_url_path = "")
auth = HTTPBasicAuth()
@auth.get_password
def get_password(username):
    if username == 'miguel':
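The preview cuts off inside the callback. In Flask-HTTPAuth's get_password pattern, the callback returns the stored password for a known user and None to reject the login; a minimal framework-free sketch of that logic (the username/password values are illustrative):

```python
# Sketch of the get_password callback pattern used by Flask-HTTPAuth:
# return the user's password if known, None to reject the login.
USERS = {"miguel": "python"}

def get_password(username):
    return USERS.get(username)
```

Flask-HTTPAuth compares the returned value against the credentials sent in the request's Authorization header and returns 401 when they do not match.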