
# endolith/DFT_ANN.py Last active Jan 24, 2020

Training neural network to implement discrete Fourier transform (DFT/FFT)

My second neural network experiment (the first was an FIR filter). The DFT output is just a linear combination of the inputs, so it should be implementable by a single layer with no activation function. Animation of weights being trained: red are positive, blue are negative. The black squares (2336 out of 4096) are unused and could be pruned out to save computation time (if I knew how to do that).
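(Aside, not part of the original gist: the "linear combination" claim can be checked directly in NumPy. The DFT of a length-N signal is a multiply by the N×N matrix W[k, n] = exp(-2πj·kn/N), which is exactly what a single bias-free linear layer computes.)

```python
import numpy as np

N = 32
n = np.arange(N)
# DFT matrix: W[k, m] = exp(-2j*pi*k*m/N)
W = np.exp(-2j * np.pi * np.outer(n, n) / N)

x = np.random.randn(N)
# A single matrix multiply reproduces the FFT result
print(np.allclose(W @ x, np.fft.fft(x)))  # True
```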

Even with pruning, it would be less efficient than an FFT, so if the FFT output is useful, probably best to do it externally and provide it as separate inputs?
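(For what it's worth, here is one hedged sketch, mine rather than the gist's, of what pruning could look like: build the ideal real-valued weight matrix for the real/imaginary stacking used in the script, zero out the entries that are already numerically zero, and confirm the transform still matches.)

```python
import numpy as np

N = 32
n = np.arange(N)
W = np.exp(-2j * np.pi * np.outer(n, n) / N)

# Real-valued weight matrix mapping [Re(x), Im(x)] -> [Re(X), Im(X)]:
# Re(X) = W.real @ Re(x) - W.imag @ Im(x)
# Im(X) = W.imag @ Re(x) + W.real @ Im(x)
M = np.block([[W.real, -W.imag],
              [W.imag,  W.real]])

# "Prune" the weights that are (numerically) exactly zero
pruned = np.where(np.abs(M) < 1e-12, 0.0, M)

x = np.random.randn(N) + 1j * np.random.randn(N)
xin = np.concatenate([x.real, x.imag])
out = pruned @ xin
X = out[:N] + 1j * out[N:]
print(np.allclose(X, np.fft.fft(x)))  # True
```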

This at least demonstrates that neural networks can figure out frequency content on their own, though, if it's useful to the problem.

The loss goes down for a while but then goes up. I don't know why:

```python
"""Train a neural network to implement the discrete Fourier transform."""
from keras.models import Sequential
from keras.layers import Dense
import numpy as np
import matplotlib.pyplot as plt

N = 32
batch = 10000

# Generate random input data and desired output data
sig = np.random.randn(batch, N) + 1j*np.random.randn(batch, N)
F = np.fft.fft(sig, axis=-1)

# First half of inputs/outputs is real part, second half is imaginary part
X = np.hstack([sig.real, sig.imag])
Y = np.hstack([F.real, F.imag])

# Create model with no hidden layers, same number of outputs as inputs.
# No bias needed.  No activation function, since DFT is linear.
model = Sequential([Dense(N*2, input_dim=N*2, use_bias=False)])
model.compile(loss='mean_squared_error', optimizer='adam')
model.fit(X, Y, epochs=100, batch_size=100)

# Confirm that it works
data = np.arange(N)


def ANN_DFT(x):
    if len(x) != N:
        raise ValueError(f'Input must be length {N}')
    # predict() returns shape (1, 2N); take the first (only) row
    pred = model.predict(np.hstack([x.real, x.imag])[np.newaxis])[0]
    result = pred[:N] + 1j*pred[N:]
    return result


ANN = ANN_DFT(data)
FFT = np.fft.fft(data)
print(f'ANN matches FFT: {np.allclose(ANN, FFT)}')

# Heat map of neuron weights (get_weights() returns a list; the
# kernel of the single Dense layer is its first element)
plt.imshow(model.get_weights()[0], vmin=-1, vmax=1, cmap='coolwarm')
```

### aharchaoumehdi commented Aug 18, 2018

 Have you considered the 2D DFT? I'm wondering if the same network (with higher capacity) would still work.

### endolith commented Aug 18, 2018

 @aharchaoumehdi Yes, that would work fine; it would just be a lot of connections, and inefficient compared to an FFT.
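(A quick check of that claim, mine rather than the thread's: the 2D DFT is still linear. It factors into 1D transforms over rows and then columns, and on flattened input it is a single Kronecker-product matrix of size N²×N², which is where the "lot of connections" comes from.)

```python
import numpy as np

N = 8
n = np.arange(N)
W = np.exp(-2j * np.pi * np.outer(n, n) / N)  # 1D DFT matrix (symmetric)

A = np.random.randn(N, N)
# Separable: 1D DFT over rows, then over columns
print(np.allclose(W @ A @ W, np.fft.fft2(A)))  # True

# Flattened, it's one (N^2 x N^2) weight matrix: a Kronecker product
print(np.allclose(np.kron(W, W) @ A.flatten(),
                  np.fft.fft2(A).flatten()))  # True
```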

### endolith commented Aug 19, 2018 • edited

 I tested all the optimizers. Some went down and then up, but others just trended downwards and then stopped. Need to learn more about them to understand why.

### CYHSM commented Feb 8, 2019

 Thanks for this very nice Gist! The increase in loss after around 100 epochs may come from floating-point error accumulation. However, I am still surprised by the pattern of the weights. Any idea why they look like that? I just ran it again for different N and the pattern stays very similar.

### endolith commented Feb 9, 2019 • edited

 @CYHSM Floating point error accumulation in... what? The pattern of weights is just what a DFT does: they're sinusoids of different frequencies, sampled at points evenly spaced around the unit circle. If you look at a single row or column, it's a sinusoid.
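(To make the "rows are sinusoids" point concrete, my addition rather than the thread's: row k of the ideal DFT matrix is a cosine in its real part and a negated sine in its imaginary part, at frequency k.)

```python
import numpy as np

N = 32
n = np.arange(N)
W = np.exp(-2j * np.pi * np.outer(n, n) / N)

# Row k is exp(-2j*pi*k*n/N) = cos(2*pi*k*n/N) - 1j*sin(2*pi*k*n/N)
k = 3
print(np.allclose(W.real[k], np.cos(2 * np.pi * k * n / N)))   # True
print(np.allclose(W.imag[k], -np.sin(2 * np.pi * k * n / N)))  # True
```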

### KasperJuunge commented Oct 21, 2019

 This repo is so cool! I cited it in my thesis 🤓 🙏 👏

### endolith commented Oct 21, 2019

 @KasperJuunge I'd like to see that :D

### KasperJuunge commented Oct 22, 2019

> @KasperJuunge I'd like to see that :D

I'll send it when it's done 🤘

### decanbay commented Oct 24, 2019

 Maybe it helps someone: https://github.com/decanbay/Fourier-ADALINE

### endolith commented Oct 26, 2019

 @CYHSM Here's a side view of some of the rows: 