Quoting TheOneKEA at http://www.linuxquestions.org/questions/linux-general-1/keeping-a-process-running-after-disconnect-150235/:
nohup is what you want - it's a wrapper that makes the command ignore the SIGHUP signal, which is sent to every process attached to a terminal when that terminal is closed by the shell.
Just ssh into the box and start the command using this syntax:
[user@remoteboxen user]$ nohup /path/to/command arguments &
The man page explains it better.
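One refinement worth knowing: if you redirect output yourself, it goes where you choose instead of the default nohup.out. A minimal sketch (the command and job.log filename are just examples, not from the quoted post):

```shell
# Run a command immune to hangups, with stdout sent to a log file of our
# choosing rather than the default nohup.out. Stderr goes to its own file.
nohup sh -c 'echo "long job running"' > job.log 2> err.log &
wait             # in real use you'd simply log out; we wait here to show the result
cat job.log      # -> long job running
```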
Tuesday, October 13, 2009
Friday, September 11, 2009
Rebuilding the VirtualBox Kernel Modules (Ubuntu 9.04)
Any time there is a kernel update, you would do well to rebuild the VirtualBox kernel module to ensure compatibility with your new kernel version. This can be done by executing the following command from the terminal:
sudo /etc/init.d/vboxdrv setup
Thursday, September 10, 2009
Installing Fonts in Linux (Ubuntu 9.04)
First, you can find some good free font downloads at http://www.sostars.com. I downloaded a stencil font called "Ver Army." I unzipped the file, and found a .ttf font file.
I learned how to install it from this page. Here's a summary:
To install Microsoft Windows fonts:
sudo apt-get install ttf-mscorefonts-installer
To install Red Hat Liberation fonts:
sudo apt-get install ttf-liberation
To install any other kind of font (including the one I downloaded from sostars.com):
mkdir ~/.fonts
(make a font directory in your home directory if one doesn't exist already)
mv ver-army.ttf ~/.fonts
(move your .ttf file into the .fonts folder)
Restart the computer.
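As an aside, instead of restarting you can usually just rebuild the font cache once the .ttf file is in place. This is a standard fontconfig step, not something from the page I followed:

```shell
# Refresh the per-user font cache so newly copied fonts are picked up
# without a reboot (skipped gracefully if fontconfig isn't installed).
mkdir -p ~/.fonts
if command -v fc-cache >/dev/null 2>&1; then
    fc-cache -f ~/.fonts    # -f forces a rebuild of the cache
fi
```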
Monday, August 17, 2009
GNU sed (Stream EDitor)
sed -r 's/\t+/,/g'
sed | invoke the stream editor |
-r | use extended regular expressions (similar to the -E argument for grep); this gives the '+' character in my regex its special meaning |
s | tells sed that we are doing a replacement ("substitution") operation |
\t+ | match one or more consecutive tab characters |
, | replace each match with a comma |
g | apply the substitution to every match on each line, not just the first |
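Here it is in action on a tiny sample (data.tsv and data.csv are illustrative filenames, not from my friend's file):

```shell
# Convert a small tab-separated sample to comma-separated form.
printf 'name\tage\tcity\nAda\t36\tLondon\n' > data.tsv
sed -r 's/\t+/,/g' data.tsv > data.csv
cat data.csv
# name,age,city
# Ada,36,London
```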
So, today I had a problem. A friend needed me to convert a 10 MB data file from tab-separated format to comma-separated format.
"This should take about 2 seconds."
I wasn't on my trusty little laptop (running Ubuntu 9.04 Jaunty Jackalope since March) and was stuck using a lab computer on campus, which was, of course, running Windows XP with no useful utilities whatsoever. To try to save some time, I tried to do this conversion right on my friend's computer. We opened the document in MS Word, and tried to do a Find and Replace for tabs, converting them to commas.
Slow. Killed the program several minutes into the operation.
Next, over to my trusty laptop. Loaded up jEdit, a handy programming editor that has done well for me in the past. Tried to do the find and replace.
Also slow. I killed this one about 10 minutes into the operation. "It really shouldn't be taking this long." What went wrong? jEdit had run out of memory. I found that out from the command-line terminal where I had launched it. Hmmm... maybe some kind of error box would have been nice so I didn't just sit there for 10 minutes wondering. ;)
No more of this garbage. We're going to the command line.
Always go to the command line.
I already knew about sed, but my memory was a little rusty on the command-line arguments. After about 10 minutes, I finally found what I was looking for. Converted the file in about 2 seconds.
Why is it that something that should take 2 seconds always takes 30 minutes?
Monday, April 13, 2009
Shell script for Google search result parsing
This is the shell script I wrote to help me perform the analysis I did for Quest 5.
1. Perform a site:yoursite.edu search in Google, displaying 100 results per page.
2. Save each page (Google will only give you 10 at most) into a folder named yoursite.edu
3. Download the shell script to the directory that contains the yoursite.edu directory.
4. At the command prompt, type:
./google-results-parse yoursite.edu
5. OR, if you named the yoursite.edu directory something different, run this:
./google-results-parse yoursite.edu savedresultsdirectory
6. It will create a "savedresultsdirectory-parsed" directory, which will contain a "domainlist" file and a "pagelinks" directory. The "domainlist" gives the subdomain breakdown of the search results. The "pagelinks" folder contains files for each subdomain that include all of the search result URLs for that subdomain.
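For a sense of what the parsing involves, here is a rough, hypothetical sketch of the kind of work the script does. The real script is the one linked below; every filename, regex, and output format in this sketch is an assumption, and it fakes a saved result page so it is self-contained:

```shell
# Sketch: extract result URLs from saved Google result pages and
# group them by (sub)domain. All names here are illustrative.
dir="yoursite.edu"
out="${dir}-parsed"

# Fake one saved result page so the sketch runs on its own.
mkdir -p "$dir" "$out/pagelinks"
cat > "$dir/page1.html" <<'EOF'
<a href="http://math.yoursite.edu/calc/syllabus.html">result 1</a>
<a href="http://www.yoursite.edu/opencourses/">result 2</a>
EOF

# Pull every URL out of the saved HTML and file it under its subdomain.
grep -hEo 'https?://[A-Za-z0-9.-]+[^"<> ]*' "$dir"/*.html | sort -u |
while read -r url; do
    host=$(printf '%s\n' "$url" | sed -E 's#https?://([^/]+).*#\1#')
    printf '%s\n' "$url" >> "$out/pagelinks/$host"
done

# domainlist: count of result URLs per subdomain, busiest first.
for f in "$out"/pagelinks/*; do
    printf '%s %s\n' "$(wc -l < "$f")" "$(basename "$f")"
done | sort -rn > "$out/domainlist"
```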
Download the file here.
Open Ed. Quest 5 -- Searching for a Better Way (to Search)
Quest 5
"Many BYU faculty already openly share their syllabi and other course materials on personal websites, through iTunesU, and through other mechanisms ... Find as many of the open educational resources being shared by BYU faculty as you can..."
It seems to me that discoverability is going to be the ultimate make-or-break issue for OER. One could produce world-class OER that surpasses anything any institutional OER effort puts out, and yet remain in complete obscurity, with no hope of ever actually sharing it with anyone at all. And after all, if you take the time and trouble to create a resource with openness in mind, it seems silly for it to end up worthless (or at least gravely underused) simply because you weren't able to put it somewhere people would find it.
This post isn't going to discuss the hows and whys of publishing open educational content for maximum discoverability; we'll save that for another time. However, Quest 5 gives us the specific assignment to comb over BYU's web presence looking for faculty-produced OER content, which raises the question: "How would one go about finding all of the OER on a university's web space?"
The task is not trivial.
Thursday, April 2, 2009
Copyright in Distance Education
(It is at this time that I would like to make a plug for Creative Commons licenses. Thank you.)
I think I've talked more about copyright this semester than at any other time in my entire life. This is not surprising, however, as I would guess that I am like most people in many respects, and I am assuming that most people aren't well versed in the subtle nuances and intricacies of US copyright law, including the Digital Millennium Copyright Act (DMCA) and the Technology, Education, and Copyright Harmonization Act (TEACH).
What a mouthful.