15 Mar 2016
IIS is a great web server from the good guys at Microsoft. Since it is Microsoft technology, it is natively supported in all versions of Microsoft Windows. We're going to use IIS as a web server on our home computer and use it to transfer files to other devices, like an iPad.
Requirements:
- You must have a Windows computer
- All the devices must be connected to the same Wi-Fi network
Steps:
- IIS Activation
- IIS Configuration
First Step: Activation
First (if it is not already enabled) you need to activate IIS in the Control Panel:
Go to Control Panel --> All Control Panel Items --> Programs and Features
or simply copy the below address in your address bar:
Control Panel\All Control Panel Items\Programs and Features
Now click on Turn Windows Features On or Off
and then check Internet Information Services.
Now you have IIS set up and running, and all the files in C:\inetpub\wwwroot will be published at localhost (assuming your Windows is installed on drive C).
Go to the address localhost in your browser and voilà!
Second Step: Configuration
Now open the IIS configuration. It is located in Control Panel\All Control Panel Items\Administrative Tools.
Open Internet Information Services (IIS) Manager, double-click Directory Browsing, and click Enable in the Actions pane.
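If you prefer a config file over clicking through the Manager, the same setting can be turned on with a web.config dropped into the folder you are sharing. A minimal sketch (just an alternative route; the GUI steps above do the same thing):

```xml
<!-- web.config placed in the shared folder: enables the directory listing -->
<configuration>
  <system.webServer>
    <directoryBrowse enabled="true" />
  </system.webServer>
</configuration>
```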
Now all you have to do is find out your computer's IP address on your local network. Open the Command Prompt (press Win + R, type cmd and hit Enter). In the Command Prompt window type ipconfig and copy your IPv4 address.
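If you'd rather not read through the ipconfig output, here is a small Python shortcut of my own (not part of the original steps) that prints the IPv4 address your machine uses on the local network:

```python
import socket

# "Connecting" a UDP socket does not send any packets; it only asks the OS
# which local address it would use to reach the given host.
s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
s.connect(("8.8.8.8", 80))      # any outside address works here
print(s.getsockname()[0])       # e.g. 192.168.1.42
s.close()
```

Whatever it prints is the address you will type into your other devices.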
Now go to your inetpub\wwwroot folder and copy in anything you want to transfer to your devices. Enter the IP address you copied earlier into the browser on your iPad, iPhone, etc. and hit Go.
06 Mar 2016
I have just launched a new website named ‘Um…’. It’s basically an excuse generator for the times you are late and not going to arrive on time! You can add new excuses by editing the js file and opening a pull request.
23 Feb 2016
A few days ago I watched a lecture by Dr. Mohammadreza Bateni titled ‘Language Planning’. In this video he proposed that we change the writing system of Persian from the Arabic script to the Latin one.
Image from: freelanguage.org
There were several issues that came to my mind regarding this proposal that I would like to mention:
First of all, Persian is a rich language in terms of vocabulary. Much of the Persian vocabulary comes from Arabic roots, and as you might know, Arabic has several bons with which you can make up new words. Like… A Persian speaker would never understand the concept of a bon, or Persian word formation, without knowing both Arabic grammar and the Arabic script. To give more detail: the necessary tool is the Arabic script, and the sufficient knowledge is knowledge of the Arabic bon. Without knowing the Arabic written form we cannot say that baten and batn come from the same base.
The Arabic right-to-left writing system and alphabet do not affect incorporating our language into computers, and especially onto the web.
Take Hebrew, for example: a perfect instance of a right-to-left script.
Another claim he made in the lecture was that it would make writing Persian easier. One thing for sure is that having four correspondents for /z/ and three for /s/ in our writing system does not make it difficult. In fact, it makes our language easier! (Although learning Persian writing and spelling is rather hard, especially for foreigners and L2 learners, after getting used to it most learners will understand our vocabulary system.)
In fact, one of the advantages of the Persian language is that, unlike Arabic, it does not need diacritics (in French: diacritiques).
In the end, the Persian language would benefit very little (close to none) from changing its writing system, and the change would vastly damage our language and our vocabulary.
PS. You can watch the lecture (in Persian) below:
22 Feb 2016
In the previous post I tried to introduce a function of duration and frequency. After some thought I decided to work on two fundamental questions:
a) The Aim
b) The process
The aim of this scale, and of the whole transcription, is to provide a hybrid model of intonation transcription which is suitable for both computational processing and human transcription (I will soon do a detailed analysis of existing intonational models).
The process is rather tricky. In the beginning I really wanted to work with the ToBI system. It’s fast, it’s easy to read, and finally it’s easy to implement. The thing about ToBI is that it’s not quite… you know… right. What I mean is that the ToBI system throws away a tremendous amount of prosodic data that we can’t quite say we don’t need.
The Time Frequency Function, or in short TFF, tries to compensate for the duration loss and frequency loss in ToBI. As I explained in the previous post, we can use the mentioned formula to calculate how much frequency change we had.
This alone can help us with our aim of collecting pitch data. Consider the following example:
Actually, I just realized that my audio was stereo and it was too late for me to downmix it to mono; apologies in advance.
In the pitch track below, which belongs to the audio above, I have transcribed the prosodic events as follows: r for rising pitch and f for falling pitch. The scaled number after the rising or falling mark is the amount by which the frequency has changed over the time period, scaled from 1.00 to 4.00. In the example below, transcription number 6 is scaled as 2.02, which indicates that this prosodic event is less important than number 2, and so on.
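To make the idea concrete, here is a minimal sketch of how such scaled values could be computed. It assumes the raw TFF value of an event is simply its frequency change divided by its duration, with the per-utterance values then mapped linearly onto the 1.00–4.00 range; the exact formula is the one defined in the previous post, so treat this as an illustration only:

```python
# Hypothetical sketch: scale rising/falling events by frequency change over time.
# Assumes raw TFF ~ |delta f| / delta t per event, then a linear rescale onto
# [1.00, 4.00]; the real formula is the one given in the previous post.

def raw_tff(f_start, f_end, t_start, t_end):
    """Frequency change in Hz per second over one prosodic event."""
    return abs(f_end - f_start) / (t_end - t_start)

def scale_events(events, lo=1.00, hi=4.00):
    """Map the raw TFF values of all events in an utterance onto [lo, hi]."""
    raw = [raw_tff(*e) for e in events]
    span = max(raw) - min(raw) or 1.0          # avoid division by zero
    return [lo + (hi - lo) * (r - min(raw)) / span for r in raw]

# (f_start, f_end, t_start, t_end) for each r/f event; values are made up
events = [(120, 180, 0.10, 0.35), (180, 130, 0.35, 0.70), (130, 145, 0.90, 1.10)]
print([round(s, 2) for s in scale_events(events)])     # [4.0, 2.23, 1.0]
```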
The second example is another Kurdish utterance transcribed:
The last example is a comparison between ToBI and the proposed model:
Audio and transcription retrieved from: K-ToBI (Korean ToBI) Labelling Conventions, Ex. 1 (version 3.1, November 2000)
Sun-Ah Jun
Dept. of Linguistics, UCLA
http://www.linguistics.ucla.edu/people/jun/ktobi/k-tobi.html
21 Feb 2016
This tutorial page is intended for linguists who are either doing fieldwork or recording voice data for phonological/phonetic analysis; however, it can be used in other disciplines and other projects.
First of all, this tutorial reflects only the personal experience of the author and may not apply to other situations or other people’s experience. Secondly, you can contact me if anything in this tutorial is either incorrect or did not work for you.
Fig 1. Spectral frequency display of my name. Image courtesy of Adel Rahimi
Okay, first things first: know what you are doing. Always think ahead. I have seen a lot of people who said “we’ll record and analyze, it’s an easy task”, but NO! It is not. Recording data is a tremendously hard task. You cannot predict what is going to happen, and sometimes you cannot even control it.
Recording: Always try to record in a studio; otherwise you will get a lot of noise, which will make noise reduction and analysis hard. If a studio is not available, try to minimize background noise, e.g. by going to a quiet place with few people around (even the slightest movements, like brushing your hand against a table or cloth, will be audible). If you are recording indoors, close all the windows and doors and move to a quiet room (preferably one big enough not to produce reverb while recording).
You have two options here: a) recording straight to your computer, or b) recording data on a voice recorder.
If you choose the first option, i.e. recording data directly on your computer, you’ll have to have a microphone. There’s no way you can record good-quality voice with the microphone built into your computer.
Fig 2. 3.5 mm audio input (microphone input) on the right, and headphone jack on the left
A simple 3.5 mm microphone would suffice; however, you will usually pick up some noise when using 3.5 mm jacks.
Fig 3. 3.5 mm jack, image courtesy of bsrsoft.com
XLR is far more professional than 3.5 mm; however, you need to spend more on your equipment. Most computers don’t have an XLR input, so you need to spend a few hundred bucks on a sound card. You also have to buy a microphone with an XLR output, which will probably cost a little more, but on the bright side you will get great sound quality, I mean studio quality! A new line of USB microphones has recently emerged that have a sound card built in and a USB output, which is great for low-budget recording.
Fig 4. XLR connector, image by Michael Piotrowski
There is a lot of recording software, and almost all of it provides the same quality; however, more professional software gives you the freedom to export uncompressed audio (i.e. losing less data). We will be discussing two major programs here that I personally recommend.
Audacity: Audacity’s UI is really simple: you have your recording button on top and your recording volume next to it; you can click on the level meter to monitor your input, and then you can start recording. By the way, Audacity is completely free!
Fig 5. Screenshot of Audacity, Courtesy of Adel Rahimi
Adobe Audition: Adobe Audition gives you better control; however, for most controls to be instantly available you HAVE to know the hotkeys, otherwise you’ll get lost in the menus.
Fig 6. Screenshot of Adobe Audition CC, by Adel Rahimi
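If you ever need a scripted alternative to these GUI tools, for instance to batch a series of elicitation recordings, a few lines of Python with the third-party sounddevice and soundfile packages will do it. This is my own suggestion rather than something the tutorial relies on; a minimal sketch:

```python
# pip install sounddevice soundfile
import sounddevice as sd
import soundfile as sf

fs = 44100        # 44.1 kHz mono is plenty for speech work
seconds = 5

audio = sd.rec(int(seconds * fs), samplerate=fs, channels=1)
sd.wait()                                              # block until recording ends
sf.write("take01.wav", audio, fs, subtype="PCM_16")    # uncompressed 16-bit WAV
```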
Recording data on a voice recorder gives you more versatile options. You can easily record through the voice recorder’s built-in microphone, which is enough for fieldwork. Most voice recorders have an XLR input as well as a 3.5 mm jack; the professional ones even have 2-4 XLR inputs (like the Zoom H6, which has 4 channels you can record on).
Fig 7. Zoom h6 recorder, image bhphotovideo.com
Editing: There is a lot of software for editing sound. I personally use Adobe Audition for editing my data. You can cut, delete, and monitor easily without affecting the original file, thanks to a feature called non-destructive editing. You can see the pitch display by clicking the button shown in Fig. 8.
Fig 8. Click the blue button to show the pitch display
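If you need the pitch track as numbers rather than as a picture, the Python parselmouth package (a wrapper around Praat; again my own suggestion, not something this tutorial uses) can extract it from a WAV file. A rough sketch, with a hypothetical file name:

```python
# pip install praat-parselmouth
import parselmouth

snd = parselmouth.Sound("take01.wav")       # hypothetical file name
pitch = snd.to_pitch()                      # Praat's default pitch analysis
times = pitch.xs()                          # time stamps in seconds
f0 = pitch.selected_array["frequency"]      # F0 in Hz, 0 where unvoiced

for t, f in zip(times, f0):
    if f > 0:
        print(f"{t:.3f}s  {f:.1f} Hz")
```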
Noise Reduction: Noise reduction is a crucial part of editing. It can be either constructive or destructive. Use it only for language documentation where you don’t directly analyze the sound waves, for example when documenting a language’s grammar; otherwise you have to know what you are doing, and if you don’t, DON’T.
I will be doing a whole series of videos mainly focused on Noise Reduction soon.
Further reading:
- Kinsler, L.E., Frey, A.R., Coppens, A.B. and Sanders, J.V., 1999. Fundamentals of Acoustics, 4th Edition. Wiley-VCH, December 1999, 560 pp. ISBN 0-471-84789-5.