Sound Prediction
Solution for submission 146452
A detailed solution for submission 146452 to the Sound Prediction challenge
Starter Code for Speech Recognition
Here we go, this is the last challenge of Blitz 9. In this challenge we are not going to use a text-based dataset; instead, we are going to predict the numbers spoken in an audio clip. While we will still learn tons of new things, this final challenge is more about putting what we learned in the last 4 challenges into a practical, real-world application: Speech Recognition.
What we are going to Learn
- Introduction to sound-based datasets.
- Using Mozilla DeepSpeech to train, evaluate and test our model.
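To make the first point concrete before we dive in: DeepSpeech consumes audio datasets as CSV manifests with the columns wav_filename, wav_filesize and transcript, one row per 16 kHz mono WAV clip. The sketch below shows how such a manifest could be built with pandas; the file layout and the label column are assumptions for illustration, not the actual structure of the challenge data.
import os
import pandas as pd
# Hypothetical layout: a labels CSV plus a folder of WAV clips (adjust to the real dataset)
labels = pd.read_csv("data/train.csv")  # assumed columns: wav_id, label
manifest = pd.DataFrame({
    "wav_filename": labels["wav_id"].apply(lambda i: os.path.abspath(f"data/train/{i}.wav")),
    "wav_filesize": labels["wav_id"].apply(lambda i: os.path.getsize(f"data/train/{i}.wav")),
    "transcript": labels["label"].astype(str),  # the spoken numbers as text transcripts
})
manifest.to_csv("train_manifest.csv", index=False)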
Install packages 🗃
!pip install aicrowd-cli
!mkdir assets
[Output truncated: pip installs aicrowd-cli 0.1.7 together with tqdm 4.61.0, requests 2.25.1, requests-toolbelt 0.9.1, GitPython 3.1.17 and rich 10.3.0; pip warns that the preinstalled google-colab and datascience packages pin older versions of requests and folium.]
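Once the CLI is installed, it is typically used to authenticate and pull the challenge files into the notebook. A minimal sketch is shown below; the API key is a placeholder and the challenge slug is an assumption for illustration, not taken from this notebook.
API_KEY = "YOUR_AICROWD_API_KEY"  # placeholder; use your own key from the AIcrowd website
!aicrowd login --api-key $API_KEY
!aicrowd dataset download --challenge sound-prediction  # slug assumed for illustration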
Installing DeepSpeech
The four cells below set up the environment for DeepSpeech. This is the trickiest part of the whole notebook.
!git clone --branch v0.9.3 https://github.com/mozilla/DeepSpeech
[Output truncated: clones the DeepSpeech repository at tag v0.9.3, leaving the checkout in a detached HEAD state at f2e9c85.]
Install DeepSpeech Dependencies
All the steps in this section are taken from Train IITM.
%cd /content/
# System packages needed by DeepSpeech: Python build tooling, SoX (with MP3 support) and pciutils
!sudo apt-get install python3-venv
!sudo apt-get install python3-dev
!pip install --upgrade pip
!sudo apt-get install sox
!sudo apt-get install sox libsox-fmt-mp3
!sudo apt install git
!pip install librosa==0.7.2
!sudo apt-get install pciutils
!lspci | grep -i nvidia
# Install git-lfs and pull the LFS-tracked files in the DeepSpeech checkout
!wget https://github.com/git-lfs/git-lfs/releases/download/v2.11.0/git-lfs-linux-amd64-v2.11.0.tar.gz
!tar xvf /content/git-lfs-linux-amd64-v2.11.0.tar.gz -C /content
!sudo bash /content/install.sh
%cd /content/DeepSpeech
!git-lfs pull
# Note: this cp36 wheel fails on Colab's Python 3.7 ("not a supported wheel on this platform");
# the matching ds_ctcdecoder 0.9.3 is pulled in by the editable install below anyway
!wget https://github.com/mozilla/DeepSpeech/releases/download/v0.7.4/ds_ctcdecoder-0.7.4-cp36-cp36m-manylinux1_x86_64.whl
!pip install /content/DeepSpeech/ds_ctcdecoder-0.7.4-cp36-cp36m-manylinux1_x86_64.whl
# Pin folium and build tooling, then install DeepSpeech's training package in editable mode
!pip3 install folium==0.2.1
!pip3 install --upgrade pip==20.0.2 wheel==0.34.2 setuptools==46.1.3
!pip3 install --upgrade --force-reinstall -e .
[Output truncated: apt installs python3-venv, SoX with MP3 support and pciutils; lspci reports a Tesla K80; git-lfs 2.11.0 is installed and the LFS files are pulled; the ds_ctcdecoder 0.7.4 cp36 wheel is rejected ("not a supported wheel on this platform" on Python 3.7); folium 0.2.1 and the pinned pip/wheel/setuptools are installed; finally pip install -e . installs the deepspeech-training package together with tensorflow 1.15.4, ds-ctcdecoder 0.9.3, librosa 0.8.1, numpy 1.20.3 and related dependencies, with several pip dependency-conflict warnings from Colab's preinstalled packages.]
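Before restarting, a quick hedged sanity check that the editable install exposed the training package and the 0.9.3 CTC decoder; these import names come from DeepSpeech v0.9.3, and since several core packages were just swapped out it may be more reliable to run this after the restart below.
# Confirm the DeepSpeech training package and CTC decoder import cleanly
import ds_ctcdecoder
from deepspeech_training import train  # noqa: F401
print("deepspeech_training and ds_ctcdecoder imported successfully")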
Running the cell below restarts the Colab runtime. Continue by running the remaining cells after Colab has restarted.
!nvcc --version
!nvidia-smi
# Restarting the runtime; run only the cells below after Colab has restarted
import os
os.kill(os.getpid(), 9)
[Output truncated: nvcc reports the CUDA 11.0 toolkit; nvidia-smi shows a Tesla K80 with driver 460.32.03, CUDA 11.2 and no running processes.]
Set default CUDA version
- An input prompt will ask for confirmation when changing the CUDA configuration; press Y.
# Colab's default CUDA toolkit is newer than 10.0 (11.x here); TensorFlow 1.15 / DeepSpeech needs CUDA 10.0, so we switch to it
! echo $PATH
import os
os.environ['PATH'] += ":/usr/local/cuda-10.0/bin"
os.environ['CUDADIR'] = "/usr/local/cuda-10.0"
os.environ['LD_LIBRARY_PATH'] = "/usr/lib64-nvidia:/usr/local/cuda-10.0/lib64"
!echo $PATH
!echo $LD_LIBRARY_PATH
!source ~/.bashrc
!env | grep -i cuda
%cd /content/
!wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64/cuda-repo-ubuntu1804_10.0.130-1_amd64.deb
!sudo apt-get install freeglut3 freeglut3-dev libxi-dev libxmu-dev
!sudo apt-get install build-essential dkms
!sudo dpkg -i cuda-repo-ubuntu1804_10.0.130-1_amd64.deb
!sudo apt-key adv --fetch-keys https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64/7fa2af80.pub
!sudo apt-get update
!sudo apt-get install cuda-10-0
# Point /usr/local/cuda at the CUDA 10.0 toolkit
!sudo rm /usr/local/cuda
!sudo ln -s /usr/local/cuda-10.0 /usr/local/cuda
%ls -l /usr/local/
# Swap in the GPU build of TensorFlow 1.15, which targets CUDA 10.0
!pip3 uninstall tensorflow -y
!pip3 install 'tensorflow-gpu==1.15.2'
[Output truncated: PATH and LD_LIBRARY_PATH now include /usr/local/cuda-10.0; the CUDA 10.0 apt repository is added (answer y to the cuda.list configuration prompt) and cuda-10-0 is confirmed at 10.0.130-1; /usr/local/cuda now symlinks to cuda-10.0; tensorflow 1.15.4 is uninstalled and tensorflow-gpu 1.15.2 is downloaded and installed.]
/usr/local/lib/python3.7/dist-packages (from tensorboard<1.16.0,>=1.15.0->tensorflow-gpu==1.15.2) (57.0.0) Requirement already satisfied: markdown>=2.6.8 in /usr/local/lib/python3.7/dist-packages (from tensorboard<1.16.0,>=1.15.0->tensorflow-gpu==1.15.2) (3.3.4) Requirement already satisfied: h5py in /usr/local/lib/python3.7/dist-packages (from keras-applications>=1.0.8->tensorflow-gpu==1.15.2) (3.2.1) Requirement already satisfied: importlib-metadata; python_version < "3.8" in /usr/local/lib/python3.7/dist-packages (from markdown>=2.6.8->tensorboard<1.16.0,>=1.15.0->tensorflow-gpu==1.15.2) (4.5.0) Requirement already satisfied: cached-property; python_version < "3.8" in /usr/local/lib/python3.7/dist-packages (from h5py->keras-applications>=1.0.8->tensorflow-gpu==1.15.2) (1.5.2) Requirement already satisfied: zipp>=0.5 in /usr/local/lib/python3.7/dist-packages (from importlib-metadata; python_version < "3.8"->markdown>=2.6.8->tensorboard<1.16.0,>=1.15.0->tensorflow-gpu==1.15.2) (3.4.1) Requirement already satisfied: typing-extensions>=3.6.4; python_version < "3.8" in /usr/local/lib/python3.7/dist-packages (from importlib-metadata; python_version < "3.8"->markdown>=2.6.8->tensorboard<1.16.0,>=1.15.0->tensorflow-gpu==1.15.2) (3.10.0.0) Installing collected packages: tensorflow-gpu Successfully installed tensorflow-gpu-1.15.2
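Before moving on, it can be worth confirming that the downgraded TensorFlow actually sees the GPU. This is a minimal optional check, not part of the original notebook (you may need to restart the Colab runtime first so the fresh install is picked up):
# Optional sanity check: verify the TensorFlow version and GPU visibility
import tensorflow as tf
print(tf.__version__)               # expected: 1.15.2
print(tf.test.is_gpu_available())   # True if CUDA 10.0 is linked correctly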
Importing Libraries 💻¶
# Importing Libraries
import pandas as pd
import re
from ast import literal_eval
import os
import librosa
# To make things more beautiful!
from rich.console import Console
from rich.table import Table
from rich import pretty
pretty.install()
from IPython.display import Audio
DATA_FOLDER = "data"
Training phase ⚙️¶
Downloading Dataset¶
As in the previous challenges, we need to download the dataset using the AIcrowd CLI.
API Key valid Saved API Key successfully!
# Downloading the Dataset
!rm -rf data
!mkdir data
test.zip: 0% 0.00/160M [00:00<?, ?B/s] test.csv: 0% 0.00/159k [00:00<?, ?B/s] train.csv: 0% 0.00/713k [00:00<?, ?B/s] test.csv: 100% 159k/159k [00:00<00:00, 380kB/s] train.csv: 100% 713k/713k [00:00<00:00, 966kB/s] train.zip: 0% 0.00/643M [00:00<?, ?B/s] test.zip: 21% 33.6M/160M [00:02<00:10, 12.6MB/s] val.csv: 100% 69.1k/69.1k [00:00<00:00, 269kB/s] test.zip: 42% 67.1M/160M [00:04<00:05, 15.6MB/s] val.zip: 0% 0.00/63.9M [00:00<?, ?B/s] test.zip: 63% 101M/160M [00:06<00:03, 16.9MB/s] test.zip: 84% 134M/160M [00:08<00:01, 17.6MB/s] test.zip: 100% 160M/160M [00:09<00:00, 17.1MB/s] train.zip: 26% 168M/643M [00:09<00:26, 17.8MB/s] val.zip: 53% 33.6M/63.9M [00:06<00:06, 5.03MB/s] train.zip: 31% 201M/643M [00:11<00:24, 18.3MB/s] train.zip: 37% 235M/643M [00:13<00:21, 18.7MB/s] train.zip: 42% 268M/643M [00:14<00:19, 19.0MB/s] val.zip: 100% 63.9M/63.9M [00:12<00:00, 5.22MB/s] train.zip: 47% 302M/643M [00:16<00:17, 19.1MB/s] train.zip: 52% 336M/643M [00:18<00:16, 19.0MB/s] train.zip: 57% 369M/643M [00:20<00:14, 19.2MB/s] train.zip: 63% 403M/643M [00:21<00:12, 19.3MB/s] train.zip: 68% 436M/643M [00:23<00:11, 18.6MB/s] train.zip: 73% 470M/643M [00:26<00:09, 17.5MB/s] train.zip: 78% 503M/643M [00:27<00:07, 17.6MB/s] train.zip: 83% 537M/643M [00:29<00:06, 17.6MB/s] train.zip: 89% 570M/643M [00:31<00:04, 17.8MB/s] train.zip: 94% 604M/643M [00:33<00:02, 17.8MB/s] train.zip: 99% 638M/643M [00:35<00:00, 17.8MB/s] train.zip: 100% 643M/643M [00:35<00:00, 18.0MB/s]
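The cells that log in and download the dataset are not shown above (the API key gets scrubbed), but for reference this is roughly what they look like. The challenge slug and the flags here are assumptions on my part, so check the challenge page for the exact command:
API_KEY = "PASTE_YOUR_API_KEY_HERE"  # from your AIcrowd profile page
!aicrowd login --api-key $API_KEY
# Assumed challenge slug and output flag; verify on the challenge page
!aicrowd dataset download --challenge soundprediction -o data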
Unzipping Files¶
# Unzipping the zip files into the respective set folders
!unzip /content/data/train.zip -d /content/data/train >/dev/null
!unzip /content/data/val.zip -d /content/data/val >/dev/null
!unzip /content/data/test.zip -d /content/data/test >/dev/null
Reading the Dataset¶
train_df = pd.read_csv(os.path.join(DATA_FOLDER, "train.csv"))
val_df = pd.read_csv(os.path.join(DATA_FOLDER, "val.csv"))
test_df = pd.read_csv(os.path.join(DATA_FOLDER, "test.csv"))
train_df
 | SoundID | label |
---|---|---|
0 | 0 | efficient spatialtemporal context modeling for |
1 | 1 | on the space |
2 | 2 | baryogenesis through mixing |
3 | 3 | noncommutative gravity in three dimensions |
4 | 4 | effective thermal diffusivity in |
... | ... | ... |
19995 | 19995 | dixmier trace for |
19996 | 19996 | removahedral congruences versus permutree cong... |
19997 | 19997 | viscous control of minimum |
19998 | 19998 | new boundary harnack inequalities with |
19999 | 19999 | a dynamic systems |
20000 rows × 2 columns
Preprocessing the Dataset¶
In this section, we are going to add the extra columns that DeepSpeech needs during model training.
# Preprocessing Dataset Function
def preprocess_data(df, set_name):
    # Adding the wav filepath
    df['wav_filename'] = df['SoundID'].apply(lambda x: os.path.join("/content", "data", set_name, f"{x}.wav"))
    df['transcript'] = df['label']
    # Adding the wav file size (in bytes). Most of the files are around 30,000 bytes,
    # so a constant is good enough here, but you can use the real sizes if you want :)
    # (see the sketch after the table below)
    df['wav_filesize'] = 30000
    return df
# Preprocessing all three sets
train_df = preprocess_data(train_df, "train")
val_df = preprocess_data(val_df, "val")
test_df = preprocess_data(test_df, "test")
val_df
 | SoundID | label | wav_filename | transcript | wav_filesize |
---|---|---|---|---|---|
0 | 0 | injectivity in higher order | /content/data/val/0.wav | injectivity in higher order | 30000 |
1 | 1 | minimal constraints in the parity | /content/data/val/1.wav | minimal constraints in the parity | 30000 |
2 | 2 | learning to refer | /content/data/val/2.wav | learning to refer | 30000 |
3 | 3 | on the expressive power | /content/data/val/3.wav | on the expressive power | 30000 |
4 | 4 | small parts in the bernoulli | /content/data/val/4.wav | small parts in the bernoulli | 30000 |
... | ... | ... | ... | ... | ... |
1995 | 1995 | responses of small quantum systems | /content/data/val/1995.wav | responses of small quantum systems | 30000 |
1996 | 1996 | thermal rectification in quantum | /content/data/val/1996.wav | thermal rectification in quantum | 30000 |
1997 | 1997 | decomposition and unitarity in quantum | /content/data/val/1997.wav | decomposition and unitarity in quantum | 30000 |
1998 | 1998 | on gravitational collapse in | /content/data/val/1998.wav | on gravitational collapse in | 30000 |
1999 | 1999 | cogrowth and spectral gap | /content/data/val/1999.wav | cogrowth and spectral gap | 30000 |
2000 rows × 5 columns
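If you do want the real file sizes instead of the 30,000-byte constant, here is a minimal sketch. os.path.getsize is standard library; the helper itself is not part of the original notebook:
# Optional: replace the constant wav_filesize with each file's actual size in bytes
def add_real_filesize(df):
    df['wav_filesize'] = df['wav_filename'].apply(os.path.getsize)
    return df
# e.g. train_df = add_real_filesize(train_df)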
Sound¶
Listening to some sounds with their respective labels
# Getting a sample from the dataset
example = train_df.iloc[10, :]
# Reading the sound using the path
sound, sample_rate = librosa.load(example['wav_filename'])
("Sound : ", sound), ("Label : ", sample_rate)
( ('Sound : ', array([0., 0., 0., ..., 0., 0., 0.], dtype=float32)), ('Label : ', 22050) )
The sound
is a 1D array in which each value is the amplitude of the signal, and the sample_rate
tells us how many of these array elements are played back through the speaker each second.
Note : Lower Your PC Volume :)
Audio(example['wav_filename'])
example['transcript']
'cold bosons in optical lattices'
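As a quick sanity check (not in the original notebook), the clip duration in seconds is simply the number of samples divided by the sample rate:
# Duration of the clip = number of samples / samples per second
duration = len(sound) / sample_rate
print(f"Duration: {duration:.2f} seconds")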
# Saving the preprocessed datasets
train_df.to_csv("deepspeech_train.csv", index=False)
val_df.to_csv("deepspeech_val.csv", index=False)
test_df.to_csv("deepspeech_test.csv", index=False)
Training the model + Validation + Testing¶
Now, using the DeepSpeech command line interface, we pass the dataset paths along with various other parameters; the model trains and validates every epoch, and runs testing once all epochs are done!
%cd DeepSpeech
# Note: training on the full training set takes a while; you could point --train_files at the smaller validation set for a quick dry run
# Putting the data files
# Setting up Model parameters
# Setting up the batch sizes and audio sample rate
# Using mixed precision so that the model will train faster
# Saving the test predictions
!python DeepSpeech.py --train_files ../deepspeech_train.csv --dev_files ../deepspeech_val.csv --test_files ../deepspeech_test.csv \
--n_hidden 1048 \
--audio_sample_rate 8000 --train_batch_size 32 --dev_batch_size 32 --test_batch_size 32 \
--automatic_mixed_precision True --epochs 3 \
--test_output_file ../assets/output.txt
%cd ..
/content/DeepSpeech I0610 17:45:23.446907 139905088169856 utils.py:157] NumExpr defaulting to 2 threads. I Enabling automatic mixed precision training. I Could not find best validating checkpoint. I Could not find most recent checkpoint. I Initializing all variables. I STARTING Optimization Epoch 0 | Training | Elapsed Time: 0:06:03 | Steps: 625 | Loss: 65.127078 Epoch 0 | Validation | Elapsed Time: 0:00:12 | Steps: 63 | Loss: 36.385970 | Dataset: ../deepspeech_val.csv I Saved new best validating model with loss 36.385970 to: /root/.local/share/deepspeech/checkpoints/best_dev-625 -------------------------------------------------------------------------------- Epoch 1 | Training | Elapsed Time: 0:05:58 | Steps: 625 | Loss: 28.740940 Epoch 1 | Validation | Elapsed Time: 0:00:12 | Steps: 63 | Loss: 23.245962 | Dataset: ../deepspeech_val.csv I Saved new best validating model with loss 23.245962 to: /root/.local/share/deepspeech/checkpoints/best_dev-1250 -------------------------------------------------------------------------------- Epoch 2 | Training | Elapsed Time: 0:05:58 | Steps: 625 | Loss: 19.620218 Epoch 2 | Validation | Elapsed Time: 0:00:12 | Steps: 63 | Loss: 18.581973 | Dataset: ../deepspeech_val.csv I Saved new best validating model with loss 18.581973 to: /root/.local/share/deepspeech/checkpoints/best_dev-1875 -------------------------------------------------------------------------------- I FINISHED optimization in 0:18:40.325792 I Loading best validating checkpoint from /root/.local/share/deepspeech/checkpoints/best_dev-1875 I Loading variable from checkpoint: cudnn_lstm/rnn/multi_rnn_cell/cell_0/cudnn_compatible_lstm_cell/bias I Loading variable from checkpoint: cudnn_lstm/rnn/multi_rnn_cell/cell_0/cudnn_compatible_lstm_cell/kernel I Loading variable from checkpoint: global_step I Loading variable from checkpoint: layer_1/bias I Loading variable from checkpoint: layer_1/weights I Loading variable from checkpoint: layer_2/bias I Loading variable from checkpoint: layer_2/weights I Loading variable from checkpoint: layer_3/bias I Loading variable from checkpoint: layer_3/weights I Loading variable from checkpoint: layer_5/bias I Loading variable from checkpoint: layer_5/weights I Loading variable from checkpoint: layer_6/bias I Loading variable from checkpoint: layer_6/weights Testing model on ../deepspeech_test.csv Test epoch | Steps: 103 | Elapsed Time: 0:47:20
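If you want to keep experimenting after this run, DeepSpeech.py accepts further flags for checkpoints and regularisation. The sketch below reuses the command above with a few illustrative extras; --checkpoint_dir, --learning_rate and --dropout_rate are standard DeepSpeech flags, but the values are only assumptions to start from, and the command should again be run from inside the DeepSpeech directory (wrap it in %cd DeepSpeech ... %cd .. as above):
!python DeepSpeech.py --train_files ../deepspeech_train.csv --dev_files ../deepspeech_val.csv --test_files ../deepspeech_test.csv \
    --n_hidden 1048 \
    --audio_sample_rate 8000 --train_batch_size 32 --dev_batch_size 32 --test_batch_size 32 \
    --automatic_mixed_precision True --epochs 10 \
    --checkpoint_dir ../checkpoints --learning_rate 0.0001 --dropout_rate 0.2 \
    --test_output_file ../assets/output.txt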
Getting the Predictions¶
In the previous command, we saved the testing results as output.txt
in the assets folder. Let's read the file and convert the outputs into the .csv
format.
# Reading the output.txt file
with open(os.path.join("assets", "output.txt")) as data:
    output = data.read()
# Convert the text into python list
output = literal_eval(output)
# Getting the SoundID and the predicted label for the submission
SoundID = [int(sample['wav_filename'].split("/")[-1].split(".")[0]) for sample in output]
label = [sample['res'] for sample in output]
print(SoundID[0], label[0])
3330 minimum degrecondisions for monacromatic
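A small optional check (not in the original notebook) before attaching the predictions: every test clip should appear exactly once in the output.
# Each of the 5000 test clips should have exactly one prediction
assert len(SoundID) == len(set(SoundID)) == len(test_df)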
test_df['SoundID'] = SoundID
test_df['label'] = label
test_df
 | SoundID | label | wav_filename | transcript | wav_filesize |
---|---|---|---|---|---|
822 | 3330 | minimum degrecondisions for monacromatic | /content/data/test/822.wav | kevin shields | 30000 |
1919 | 4537 | the nature of | /content/data/test/1919.wav | anna morales | 30000 |
3890 | 1333 | cygochycialogical correlations with | /content/data/test/3890.wav | lisa davis | 30000 |
2922 | 130 | regularization vydescretization in anas | /content/data/test/2922.wav | joseph garcia | 30000 |
4322 | 4525 | a refinement of | /content/data/test/4322.wav | ashley conway | 30000 |
... | ... | ... | ... | ... | ... |
3877 | 225 | pesymasm about un none un nons ins birs | /content/data/test/3877.wav | micheal figueroa | 30000 |
1110 | 1138 | opertgers ap solient concunis y c in tervolation | /content/data/test/1110.wav | kevin nicholson | 30000 |
795 | 1752 | hig prbolic masures oon intimite ymens in of | /content/data/test/795.wav | ryan schmitt | 30000 |
3537 | 232 | erbect wo e in er ratir ize o | /content/data/test/3537.wav | debra cabrera | 30000 |
3738 | 1038 | exporig the di al gauge im he tree brageng | /content/data/test/3738.wav | beth chaney | 30000 |
5000 rows × 5 columns
# It is recommended to sort the rows by SoundID before making the submission
test_df = test_df.sort_values("SoundID")
test_df
 | SoundID | label | wav_filename | transcript | wav_filesize |
---|---|---|---|---|---|
2150 | 0 | eeranalysis for probabilities | /content/data/test/2150.wav | todd williams | 30000 |
4100 | 1 | saely cicompltion for universal | /content/data/test/4100.wav | taylor ellis | 30000 |
3306 | 2 | ficed points of | /content/data/test/3306.wav | justin francis | 30000 |
3860 | 3 | geomotry of legrang gean grahen nin | /content/data/test/3860.wav | james washington | 30000 |
1720 | 4 | creation and danashion of | /content/data/test/1720.wav | gail floyd | 30000 |
... | ... | ... | ... | ... | ... |
1369 | 4995 | plame waves with leak singularities | /content/data/test/1369.wav | nichole bonilla | 30000 |
985 | 4996 | intractive emsc us a sepuale | /content/data/test/985.wav | amber franklin | 30000 |
2615 | 4997 | luclea inticescie of tarsmi | /content/data/test/2615.wav | gregory martin | 30000 |
1163 | 4998 | search for havy | /content/data/test/1163.wav | dr patrick johnson | 30000 |
732 | 4999 | esem nigs boson searches in | /content/data/test/732.wav | erika hurst | 30000 |
5000 rows × 5 columns
Note : Please make sure there is a file named submission.csv
in the assets
folder before submitting.
# Saving the submission in the assets directory
test_df.to_csv(os.path.join("assets", "submission.csv"), index=False)
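Optionally, reload the saved file to confirm it exists and looks sensible before submitting (a quick check, not part of the original notebook):
# Peek at the saved submission
sub = pd.read_csv(os.path.join("assets", "submission.csv"))
print(sub.shape)
sub.head()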
Submit to AIcrowd 🚀¶
Note : Please save the notebook before submitting it (Ctrl + S)
Using notebook: /content/drive/MyDrive/Colab Notebooks/Copy of Deep speech for submission... Removing existing files from submission directory... Scrubbing API keys from the notebook... Collecting notebook... submission.zip ━━━━━━━━━━━━━━━━━━━━ 100.0% • 474.5/472.8 KB • 1.1 MB/s • 0:00:00 ╭─────────────────────────╮ │ Successfully submitted! │ ╰─────────────────────────╯ Important links ┌──────────────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────┐ │ This submission │ https://www.aicrowd.com/challenges/ai-blitz-9/problems/soundprediction/submissions/144266 │ │ │ │ │ All submissions │ https://www.aicrowd.com/challenges/ai-blitz-9/problems/soundprediction/submissions?my_submissions=true │ │ │ │ │ Leaderboard │ https://www.aicrowd.com/challenges/ai-blitz-9/problems/soundprediction/leaderboards │ │ │ │ │ Discussion forum │ https://discourse.aicrowd.com/c/ai-blitz-9 │ │ │ │ │ Challenge page │ https://www.aicrowd.com/challenges/ai-blitz-9/problems/soundprediction │ └──────────────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────┘
Congratulations 🎉 you did it! There is still a lot of room for improvement, and changing the hyperparameters is a good place to start. Have fun!
And btw -
Don't be shy to ask questions about any errors you hit, or doubts about any part of this notebook, in the discussion forum or on the AIcrowd Discord server; the AIcrew will be happy to help you :)
Also, want to give us your valuable feedback for the next Blitz, or want to work with us on creating Blitz challenges? Let us know!