diff --git a/README.md b/README.md
index 6adaf81328b0b09ab94d328b21398167a158f667..4bc86e1c62abc2c6a461d4aac589d72926938b1d 100644
--- a/README.md
+++ b/README.md
@@ -14,7 +14,18 @@ All other python modules can be installed directly through PIP, see next section
 
 ## Install
 
-The simplest way to install Autokara is through PIP :
+### Linux
+
+Using a virtual environment is strongly recommended (though not mandatory if you know what you're doing):
+```bash
+$ python -m venv env       # create the virtual environment (only needed once)
+$ source env/bin/activate  # activate the virtual environment
+
+# To exit the virtual environment
+$ deactivate
+```
+
+The simplest way to install Autokara is through PIP:
 ```bash
 # Using HTTPS
 $ pip install git+https://git.iiens.net/bakaclub/autokara.git
@@ -32,17 +43,6 @@ $ autokara-gen-lang
 ```
 
 
-If you plan on contributing to development, the use of a virtual environment is recommended :
-```bash
-$ python -m venv env     # create the virtual environment, do it once
-$ source env/bin/activate # use the virtual environement
-$ pip install git+ssh://git@git.iiens.net:bakaclub/autokara.git # install autokara
-
-# To exit the virtual environment
-$ deactivate              
-```
-
-Having a CUDA-capable GPU is optional, but can greatly reduce processing time in some situations.
 
 ## Configuration
 
@@ -50,7 +50,7 @@ Autokara comes with a default config file in `autokara/default.conf`.
 
 If you want to tweak some values (enable CUDA, for example), you should add them to a new config file in your personal config directory : `~/.config/autokara/autokara.conf`.
 
-This new file has priority over the default one, which is used only as fallback for unspecified values.
+This new file takes priority over the default one, which is only used as a fallback.
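+
+One simple way to set this up (a sketch only, any text editor works):
+```bash
+# create the personal config directory, then copy just the values you want
+# to override from autokara/default.conf into the new file and edit them
+$ mkdir -p ~/.config/autokara
+$ $EDITOR ~/.config/autokara/autokara.conf
+```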
 
 
 # Use
@@ -80,10 +80,14 @@ To use a phonetic transcription optimized for a specific language, use `--lang`
 ```bash
 $ autokara vocals.wav output.ass --lang jp
 ```
-Available languages are :
+
+Available language options are:
 ```
 jp : Japanese Romaji (default)
 en : English
+fr : French
+fi : Finnish
+da : Danish
 ```
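+
+For example, to use the newly added French transcription:
+```bash
+$ autokara vocals.wav output.ass --lang fr
+```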
 
 Full help for all options is available with :
@@ -121,7 +125,7 @@ $ autokara-plot vocals.wav lyrics.ass
 ```
 
 
-# Documentation and useful links
+# Documentation and References
 
 This section is mainly intended for people who would like to contribute and/or are curious about how this stuff works
 
@@ -131,25 +135,13 @@ This section is mainly intended for people who would like to contribute and/or a
 
 ## Syllable segmentation
 
-### Symbolic methods
-
- - [Syllable segmentation](https://www.sciencedirect.com/science/article/pii/S1877050916319068/pdf?md5=abc426e84a71cd4f5c0e6bef9713643e&pid=1-s2.0-S1877050916319068-main.pdf&_valck=1)
- - [Syllable segmentation and recognition](https://cdn.intechopen.com/pdfs/15947/InTech-Syllable_based_speech_recognition.pdf)
- - [Onset detection with librosa](https://librosa.org/doc/latest/onset.html)
-
-### Machine Learning & Deep Learning methods
+[Aligning lyrics to song](https://github.com/jhuang448/LyricsAlignment-MTL) (Jiawen Huang, Emmanouil Benetos, Sebastian Ewert, 2022)
 
 [Using CNNs on spectrogram images](https://www.ofai.at/~jan.schlueter/pubs/2014_icassp.pdf) (Schlüter, Böck, 2014) :
  - [MADMOM implementation](https://madmom.readthedocs.io/en/v0.16/modules/features/onsets.html)
 
-[Aligning lyrics to song](https://github.com/jhuang448/LyricsAlignment-MTL) (Jiawen Huang, Emmanouil Benetos, Sebastian Ewert, 2022)
-
-### Other methods
 
-Other stuff goes here
 
-## Syllable recognition
 
-If we ever want to use an AI to identify syllables without a reference lyrics file.