@patrick-samy
Last active April 29, 2024 15:48
Split large audio file and transcribe it using the Whisper API from OpenAI
import os
import sys
import openai
import os.path
from dotenv import load_dotenv
from pydub import AudioSegment

# Read the OpenAI API key from a .env file.
load_dotenv()
openai.api_key = os.getenv('OPENAI_API_KEY')

# Load the input MP3 given as the first command-line argument.
audio = AudioSegment.from_mp3(sys.argv[1])

# Cut the audio into 25-minute segments (pydub slices in milliseconds).
segment_length = 25 * 60
duration = audio.duration_seconds

print('Segment length: %d seconds' % segment_length)
print('Duration: %d seconds' % duration)

# Base name (without extension), reused for the exported segments and the transcript.
segment_filename = os.path.basename(sys.argv[1])
segment_filename = os.path.splitext(segment_filename)[0]

number_of_segments = int(duration / segment_length)

segment_start = 0
segment_end = segment_length * 1000
enumerate = 1
prompt = ""

# Make sure the output directory for the transcript exists.
os.makedirs('transcripts', exist_ok=True)

for i in range(number_of_segments):
    # Slice out the current segment and export it as a temporary MP3.
    sound_export = audio[segment_start:segment_end]
    exported_file = '/tmp/' + segment_filename + '-' + str(enumerate) + '.mp3'
    sound_export.export(exported_file, format="mp3")
    print('Exported segment %d of %d' % (enumerate, number_of_segments))

    # Transcribe the segment, passing the text accumulated so far as a prompt
    # so Whisper keeps context across segment boundaries.
    f = open(exported_file, "rb")
    data = openai.Audio.transcribe("whisper-1", f, prompt=prompt)
    f.close()
    print('Transcribed segment %d of %d' % (enumerate, number_of_segments))

    # Append this segment's transcription to a single transcript file.
    f = open(os.path.join('transcripts', segment_filename + '.txt'), "a")
    f.write(data.text)
    f.close()

    prompt += data.text

    # Advance the window to the next segment.
    segment_start += segment_length * 1000
    segment_end += segment_length * 1000
    enumerate += 1
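
The script takes the path to the MP3 as its only argument, e.g. python transcribe.py recording.mp3 (assuming it is saved as transcribe.py and ffmpeg is installed for pydub's MP3 support), and appends each segment's transcription to transcripts/<input basename>.txt.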
@ceinem commented Apr 29, 2024

Hi, thanks for sharing this code. Just as a warning: you use int() to compute number_of_segments, and because it rounds down, the end of the recording gets dropped. For example, with a 3200-second file the code decides on 2 segments of 1500 seconds and silently drops the last 200 seconds; math.ceil() would be the correct function.
Also, I believe the OpenAI API usage is no longer current; I had to adjust it to:

from openai import OpenAI
client = OpenAI()
data = client.audio.transcriptions.create(model="whisper-1", file=f)
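
For anyone applying both fixes together, here is a minimal, untested sketch of the two changes against the script above (it assumes duration, segment_length, exported_file, and prompt are defined as in the original script, and that the accumulated prompt should still be forwarded):

import math
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Round up so the final partial segment is not dropped.
number_of_segments = math.ceil(duration / segment_length)

# openai>=1.0 client call; prompt still carries context from earlier
# segments, and the transcription text is available as data.text.
with open(exported_file, "rb") as f:
    data = client.audio.transcriptions.create(model="whisper-1", file=f, prompt=prompt)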
