
Mustafa Aldemir mstfldmr

@mstfldmr
mstfldmr / gist:f6594b2337e3633673e5
Created March 19, 2015 08:51
Example of Volley GET and POST requests with parameters and headers
RequestQueue queue = Volley.newRequestQueue(context);
// for a POST request, only the first argument changes: Request.Method.POST
StringRequest sr = new StringRequest(Request.Method.GET, "http://headers.jsontest.com/",
        new Response.Listener<String>() {
            @Override
            public void onResponse(String response) {
                Log.d("HttpClient", "success! response: " + response);
            }
        },
        new Response.ErrorListener() {
            @Override
            public void onErrorResponse(VolleyError error) {
                Log.e("HttpClient", "error: " + error.toString());
            }
        });
queue.add(sr);
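The gist title mentions parameters and headers, but the preview cuts off before that part. A minimal sketch of the POST variant, supplying form parameters and headers through the standard Volley hooks `getParams()` and `getHeaders()` (field names and header values here are placeholders, not from the original gist):

```java
StringRequest post = new StringRequest(Request.Method.POST, "http://headers.jsontest.com/",
        response -> Log.d("HttpClient", "success! response: " + response),
        error -> Log.e("HttpClient", "error: " + error.toString())) {
    @Override
    protected Map<String, String> getParams() throws AuthFailureError {
        // form-encoded body parameters (placeholder key/value)
        Map<String, String> params = new HashMap<>();
        params.put("name", "value");
        return params;
    }

    @Override
    public Map<String, String> getHeaders() throws AuthFailureError {
        // request headers (placeholder value)
        Map<String, String> headers = new HashMap<>();
        headers.put("Content-Type", "application/x-www-form-urlencoded");
        return headers;
    }
};
queue.add(post);
```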
@mstfldmr
mstfldmr / gist:1bcfd179bfd3615b45a3
Last active August 29, 2015 14:18
Enable Volley Logging
# To get verbose logs from the Volley library, use adb:
adb -s DEVICE_ID shell setprop log.tag.Volley VERBOSE
# You can find your DEVICE_ID with:
adb devices
# To persist this setting across reboots:
adb -s DEVICE_ID shell setprop persist.log.tag.Volley VERBOSE
@mstfldmr
mstfldmr / gist:fc4fa436f2e553b10865
Last active March 31, 2022 07:15
Calling Fragment method from Activity / Activity method from Fragment
/*
 * From fragment to activity:
 */
((YourActivityClassName) getActivity()).yourPublicMethod();

/*
 * From activity to fragment:
 */
FragmentManager fm = getSupportFragmentManager();
// if you added the fragment via layout XML, look it up by its id:
YourFragmentClass fragment = (YourFragmentClass) fm.findFragmentById(R.id.your_fragment_id);
fragment.yourPublicMethod();
@mstfldmr
mstfldmr / CameraPreview.java
Last active August 29, 2015 14:22
Get Display Size in Android
package net.aldemir.myapp.camera;
import android.content.Context;
import android.content.res.Configuration;
import android.graphics.Point;
import android.hardware.Camera;
import android.os.Handler;
import android.util.AttributeSet;
import android.util.Log;
import android.view.Display;
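The preview above stops at the imports, so the part that actually reads the display size is missing. A minimal sketch of the usual approach inside an Activity, using `Display.getSize` (variable names are illustrative, not from the original file):

```java
// Inside an Activity (or anywhere with access to a WindowManager):
Display display = getWindowManager().getDefaultDisplay();
Point size = new Point();
display.getSize(size);   // fills 'size' with the display dimensions in pixels
int width = size.x;
int height = size.y;
Log.d("DisplaySize", "width=" + width + " height=" + height);
```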
This sample demonstrates how to handle the events fired when a window gains focus, or loses it to another window, application, or tab.
from captcha.image import ImageCaptcha
from matplotlib import pyplot as plt
import numpy as np

capgen = ImageCaptcha()
x = np.random.randint(0, 99999)           # random numeric label for the captcha
im_bytes = capgen.generate(str(x))        # BytesIO holding a PNG of the captcha
img = plt.imread(im_bytes, format='png')  # decode the PNG into an array
plt.imshow(img)
plt.title(str(x))
plt.show()
from matplotlib import pyplot as plt
import cv2
img = cv2.imread('/Users/mustafa/test.jpg')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
plt.imshow(gray, cmap='gray')  # without cmap='gray', matplotlib applies its default colormap
plt.title('my picture')
plt.show()
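For color display the same pitfall applies in reverse: OpenCV loads images in BGR channel order, while matplotlib expects RGB, so the channel axis has to be flipped before `imshow`. A small sketch of the flip itself, which needs only NumPy (equivalent to `cv2.cvtColor(img, cv2.COLOR_BGR2RGB)`):

```python
import numpy as np

# cv2.imread returns a (height, width, 3) array in BGR order.
bgr = np.zeros((2, 2, 3), dtype=np.uint8)
bgr[..., 0] = 255            # set the blue channel (index 0 in BGR)

rgb = bgr[..., ::-1]         # reverse the channel axis: BGR -> RGB
print(rgb[0, 0])             # blue now sits in the last channel: [0 0 255]
```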
# MXNet uses channels_first data format while Tensorflow uses channels_last data format.
img.shape
# (3, 224, 224)
x = np.moveaxis(img, 0, 2)
x.shape
# (224, 224, 3)
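The axis move above can be verified on a dummy array (a minimal sketch with made-up dimensions):

```python
import numpy as np

# channels-first dummy image: (channels, height, width), as MXNet expects
img = np.zeros((3, 224, 224))

# move the channel axis (position 0) to the end, giving channels-last
x = np.moveaxis(img, 0, 2)
print(img.shape, '->', x.shape)  # (3, 224, 224) -> (224, 224, 3)

# moveaxis returns a view, so the conversion is cheap and reversible:
back = np.moveaxis(x, 2, 0)
print(back.shape)  # (3, 224, 224)
```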
import keras
from keras.datasets import mnist
from keras.models import Sequential
from keras.layers import Dense, Activation, Flatten, Input
from keras.layers import Conv2D, MaxPooling2D, UpSampling2D
import matplotlib.pyplot as plt
from keras import backend as K
import numpy as np
from keras.preprocessing.image import ImageDataGenerator, array_to_img, img_to_array, load_img
# Edit jupyter_notebook_config.py
# https://jupyter-notebook.readthedocs.io/en/latest/config.html
c.NotebookApp.allow_origin = '*'  # allow requests from any origin
c.NotebookApp.ip = '0.0.0.0'      # listen on all network interfaces