NSF Geoscience Education 2009-12-17

TO: Principal Investigators, Department Chairs, Deans and Directors

FROM: Joshua L. Rosenbloom, Associate Vice Provost for Research and Graduate Studies (RGS)

RE: NSF Geoscience Education (GeoEd) NSF 10-512

Date Posted: December 17, 2009
Deadline for internal competition: January 19, 2010
Agency deadline for Full Proposal: March 8, 2010
Sponsor: National Science Foundation
Limit on number of proposals: An organization may be the lead organization on only one Track 2 proposal submitted per competition.

The GeoEd program invites proposals in four main areas:

-Advancing public Earth system science literacy, particularly through strengthening geoscience education in grades K-14 and informal education settings;

-Fostering development and training of the diverse scientific and technical workforce required for 21st century geoscience careers;

-Utilizing modern technologies to facilitate and increase access to geoscience education and/or develop innovative approaches for using geoscience research activities and data for educational purposes; and,

-Establishing regional networks and alliances that bring together scientists, formal and informal science educators, as well as other stakeholders, in support of improving Earth system science education and broadening participation in the geosciences.

The GeoEd Program accepts proposals for pilot or proof-of-concept projects (Track 1) and integrative collaborations (Track 2), as well as for conferences or workshops related to the mission of the program. The maximum amount that can be requested for a Track 2 proposal is $500K, but the average award size is anticipated to be on the order of $400K. Track 2 projects can have a maximum duration of four years.

URL: http://nsf.gov/pubs/2010/nsf10512/nsf10512.htm

URL for "Limited submission proposals procedure":
https://documents.ku.edu/policies/research/institutionalendorsement.htm

If you plan to submit a pre-proposal, please alert Barbara Earl (864-7781, bearl@ku.edu) one week prior to the internal deadline. This information will assist RGS in organizing the review committee.

The pre-proposal should include (1) a three-page narrative [describing the proposed research, including key personnel and partnerships, addressing any special components requested in the agency's solicitation (Project Description, Page 13; Additional Review Criteria, Page 16), and a generalized yearly budget] and (2) a short vita for the PI and all co-investigators, including funding history and current and pending support. Investigators may include suggestions of faculty members whom they believe are especially well qualified to evaluate pre-proposals for this initiative. Please submit this material electronically to Barbara Earl, manager of Proposal Services, by 5:00 p.m. on January 19, 2010 (bearl@ku.edu, 864-7781).

If the number of pre-proposals exceeds the mandated limit, an internal review committee will be formed to review all pre-proposals. The committee intends to select a proposer by February 1, 2010, thereby giving the proposer five weeks to prepare the proposal for submission.

If you have any questions about the solicitation or the process described above, please contact Barbara Earl.

Posted in Uncategorized | Comments Off on NSF Geoscience Education 2009-12-17

LaTeX editing with AucTeX

I don't like AucTeX very much yet, but I'm trying. Here are things I've "found out".

http://emacsworld.blogspot.com/2008/04/cleaning-up-or-deleting-latex.html

AucTeX provides a simple interface to clean up the files. Hitting C-c C-c and typing 'Clean' or 'Clean All' deletes the intermediate files associated with the .tex file in the current buffer.

That's right, it ONLY deletes the intermediate files with respect to the current buffer where C-c C-c is called.

The difference between 'Clean' and 'Clean All' is that, in the latter case, the output files are also deleted. And the files can also be deleted by calling M-x TeX-clean.
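
As far as I can tell, 'Clean' amounts to deleting the usual intermediate files by hand. A rough shell equivalent for a document named paper.tex (the exact suffix lists are configurable inside AucTeX, so treat this as a sketch, not the real implementation):

# roughly what 'Clean' does for paper.tex
rm -f paper.aux paper.log paper.toc paper.lof paper.bbl paper.blg paper.out
# 'Clean All' additionally removes the output files
rm -f paper.dvi paper.ps paper.pdf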

Posted in Uncategorized | Comments Off on LaTeX editing with AucTeX

Advanced Virus Remover Hassle

A lady I help called because her Windows system said it was infected and she
needed to send in $49.95. It said she had a pernicious worm, but when I
went and saw it, I was quite impressed by the intrusion.

A thing called "Advanced Virus Remover" was flashing all kinds of
popups and scary warnings. It even rewrites the desktop background for
every user with a horrible warning.

http://www.removevirus.org/virus-strains/remove-advanced-virus-remover-258/

So what? No big deal. Get rid of it.

Easier said than done. AVR disables McAfee antivirus. It disables
access to the command prompt, taskmgr, and regedit, telling the user
those programs cannot be run because they are infected.

I found lots of discussion about this on the net, lots of people
offering to give me something to fix it. How to know which are honest,
and which are scams that will dig me in deeper?

I gambled on one that seemed more honest.

Go here:

http://www.patheticcockroach.com/mpam4/index.php?p=31

At the bottom there is "mpam4_taskmgrXP.exe", a task manager you can run
that defeats Advanced Virus Remover.

Run that, manually kill the Advanced Virus Remover (AVR) process in the
list, then manually remove the Advanced Virus Remover directory from
c:\Program Files.

After that, your McAfee will run and quarantine a bunch of files.

I also found another free spyware checker to run: Malwarebytes Anti-Malware.

After that all is well.

Posted in Windoze | Comments Off on Advanced Virus Remover Hassle

Cluster Journal

Entry 1. Trials of Dell BIOS & Firmware upgrades

Entry 2. One failed Rocks install https://pj.freefaculty.org/blog/?p=44

Entry 3. Rocks install failed again.

Posted in Uncategorized | Comments Off on Cluster Journal

Cluster Journal Entry 2

Now I'll try the Rocks 5.2 install. I decided to go outside the defaults by re-designing the disk partitions. I know what I want, I really do. But after about 1 hour of installation, it failed because something was wrong with the partitions I created. I don't need/want a big NFS partition, because an external device will be attached for that, and it seems like a big waste to set aside almost the whole hard disk for NFS on the front end and the compute nodes.

Philip warned me not to do this, but I tried it anyway. He was correct. In the manual partitioning dialog, I'm not sure whether the NFS partition is supposed to mount on /export or /state/partition1. I tried the latter, but there was an error that there was not enough space for /var; if I let it put that on /export, it says OK. But when the Rocks install process gets to the very end, it asks for the Torque Roll, and disaster happens. I think either I named the NFS partition incorrectly or I did not make it big enough. So I'll start again with default partitions, but in case you want to see, I've uploaded the error log.

https://pj.freefaculty.org/linux/cluster/rockyFailed-1.txt

Too bad: the next time it fails too, but differently 🙂

The default partitioning did not solve everything. After inserting the Roll disks, the Rocks installer begins to build the distribution and an error appears.

Unable to read package metadata. This may be due to a missing repodata directory. Please ensure that your tree has been correctly generated.

https://pj.freefaculty.org/linux/cluster/rockyFailed-2.txt

Well, what could cause this? The only Roll I'm using that is not directly from the Rocks distro is Torque. I need the Torque Roll because this system is intended to go into a MOAB system of clusters. If Torque is the problem, I guess I'll find out by rebuilding without it.

The other possibility is that the CentOS disk is corrupt. I just wrote it (and verified it in k3b), but the Rocks install does not prompt to do the disk integrity check. Come to think of it, I have seen this one before: bad CentOS disk -> failed install. But now I've found another PC to boot off the CentOS disk and run the disk check. It passes.

I suppose there may be something wonky with my Rocks Kernel disk. So I made another one.

Perhaps the third time is the charm. WITHOUT the Torque Roll, but with CentOS-5.3_x64 and the Rocks rolls "base", "ganglia", "hpc", "area51", and "web server", Rocks does install.

Posted in Uncategorized | Comments Off on Cluster Journal Entry 2

Cluster Journal Entry 1

I inherited 3 Dell PowerEdge 2950s, 60 Dell PowerEdge 1950s, 3 Dell PowerVault MD3000s, some racks, power units, several switches, and a few boxes of cables and wires. These are 17 months old but have never been used. There has been an on-going hassle getting sufficient power for these units. It's almost sickening. I came into this at the very end, but I've sat through enough meetings to gag a maggot.

But it has all been worked out now. On November 25, we moved the systems into a server room in the KU Research & Graduate Studies unit, which has been most gracious 🙂

I test-installed Rocks 5.2 on several small test systems (ordinary PCs) in preparation for the big show. I learned that it DOES NOT work using CentOS 5.4 as the OS, but it does work with either the Rocks-supplied OS (CentOS 5.2) or the CentOS 5.3 distribution disks.

I brought back one 2950, one 1950, and one MD3000 to my office in order to do some testing. Wow, these are loud. Unbelievable, really. I can barely hear Jimi Hendrix with the volume all the way up.

The first problem for me has been figuring out "what is Dell's problem and what is my problem." The 2950 has a Remote Administration device with a separate ethernet connection. I gather that, if I can make that work, then I'll be able to reboot the system remotely when it hangs. That must be a priority for people who run Windows servers. Frankly, I've never hung a Linux server in 10 years. Nevertheless, I want to make the most of the hardware. So I used the Dell Service Tags to go looking for information on updates. Surely there are BIOS and other updates required.

The Dell website is a complicated tangle of update scripts, jargony-named things I don't understand, vague advice, and weird warnings. I understand the RedHat RPM packaging system very well, but Dell's pages are written for people who don't understand much of anything, except that they launch off into jargony technical abuse every other paragraph. Just getting the firmware updates together is a formidable task.

Finally, I think I figured it out: the minimum necessary elements have been collected into a single DVD.

1. SUU (version 6.1.1) is a collection of Server Updates: BIOS, firmware, and such.

That's not bootable. In order to use it, one needs a boot disk, the name of which Dell seems to change every few months. The current version is called

2. SMTD (the version I downloaded is OM_6.1.0_SMTD_A00.iso)

One of the really confusing things is that the BIOS & firmware can be updated using the SMTD disk BEFORE the Linux OS is installed, but Dell also provides individualized DUPs (Dell Update Packages). I am hoping that, if I miss some firmware updates with the SMTD/SUU disk combination, I can get them after installing Rocks.

The MD3000 apparently needs a separate firmware & driver update, which can only be installed after the OS is running.

In the SMTD, the options are pretty obvious. I approved all the suggested BIOS & firmware upgrades. I was a little bothered by the options for installing the OS. They, of course, don't have Rocks or CentOS, so choosing an OS leads to a blind alley in which Dell's SMTD wants to partition my hard disk. I had to back out of that because Rocks does not support LVM (logical volume management), which is what Dell uses by default. I checked with a Dell rep online, and he said it is OK because the SMTD is really only needed for MS Windows installs, since that OS needs driver upgrades before installation, but Linux does not.

I wish I could get RedHat 5.3 disks, but I'm ashamed to ask my tech support for them. After RH 5.4 was released, one of our systems malfunctioned because of an automatic RPM update, and I needed the RH 5.4 ISO (disk) file "right away". After about 2 weeks, tech support uploaded it for me (a full 6 weeks after RedHat had released it). Now I can get 5.4, but they removed 5.3 from their server (complain, complain!). I'd need the RedHat Advanced Server disk, and I'm afraid we only get the Enterprise Server, so maybe it is not so bad to ignore them.

Nevertheless, the Rocks system uses CentOS as its default distribution, and I may be buying trouble by using the authentic RedHat. I'm a little worried that CentOS is not in the list of "supported distributions" from Dell, so their customized storage drivers won't install. But there is some promising chatter on the Dell community site, and the Dell software repository uses RedHat and CentOS interchangeably in at least one spot.

One trouble I have is that the local DHCP server is not configured to give this 2950 system an IP number. I'm afraid that is slowing the firmware updates. It appears the SMTD/SUU firmware update process tries to go on the Internet to check for updates. It does that even though, in the configuration, I checked the "use disk" option for firmware.

Oh, well. It dies at 5% completion, "firmware deployment failed". Reboot required.

Let's see what happens if I plug a live ethernet cable into one of the 5 ethernet jacks. I'm guessing which one is the "live" one.

Interesting. After rebooting into the SMTD, the firmware configuration panel shows that some drivers have been updated since the first time I tried this. Hopefully, that means some changes were actually applied the first time, even though the "Dell Systems Build and Upgrade Utility" never moved off 5% completion, indicating "Collecting Server Info: Checking For Firmware Updates". It's frustrating because the configurator already did the required checking and told me what I need to do. The un-hopeful interpretation would be that the first try fouled the firmware updates.

On the second try, it stuck again at 5% for a long time: "checking for firmware updates." That's the same place it died before. But, what light through yonder window breaks! The screen says "attempting to update BIOS" and then it rebooted.

It's not a very smart update process, because the SUU non-bootable driver disk is still in the CD drive, so when the system restarts it just stalls, saying there is no operating system. So I put the SMTD back in & reboot. Then I re-run the Dell Systems Build and Upgrade Utility, and most of the firmware has been updated. Still, 2 ethernet cards need firmware. Awesome, here we go again. Wait at 5% done for 10 more minutes. After that, it rebooted and seems OK.

What a hassle. They want me to do this for each and every one of the 63 blades? Cut me some slack.

I'll continue with another post, Cluster Journal Entry 2 (for originality).

Posted in Linux | Comments Off on Cluster Journal Entry 1

R: recover commands & track methods

http://tolstoy.newcastle.edu.au/R/help/05/09/12506.html

From: Prof Brian Ripley
Date: Thu 22 Sep 2005 - 19:10:08 EST

The original reply was deliberately (I guess) vague. (I've removed the history, as attributions had already been removed, in violation of copyright law. If you cite someone, you MUST credit the author.)

Sometimes a little knowledge is a dangerous thing, and we have had a number of partially true answers.

Spreading confusion between the S4 classes of the 'methods' package and the (sometimes called S3) classes of base R is also dangerous. The R documentation refers to S3 methods and classes unless otherwise stated (and in the methods package documentation). Please follow that lead.

On Thu, 22 Sep 2005, Spencer Graves wrote:

> Is there general documentation on a procedure to follow to:
>
> (a) Find what methods are available for a particular class of
> objects?

?methods, unless you mean an S4 class.

Be careful here: methods `for a particular class' are not all that might be dispatched, as methods for classes the object inherits from may also be used. Thus "lm" methods may be invoked for "glm" objects, and you may need to call methods() for all the classes the object inherits from.

> (b) Find what classes of objects have methods defined for a particular
> generic function?

?methods, unless you mean S4 classes (and that help page leads you to the right place for those).

> (c) Get the code that's actually used?

getAnywhere() on the asterisked results of (a) or (b).

For a specific generic and a specific class, getS3method().

[There is a potential gap here as the "bar" method for class "foo" need not be called foo.bar(). So guessing the name may not work, but getS3method("foo", "bar") will. AFAIK there are no live examples of this.]

> For example, I recently needed to access numbers associated with an
> object of class "lmer". Sundar suggested I use 'getMethod("show",
> "summary.lmer")'. However, this doesn't work with the example below.

(I think that was intended to refer to the default method for princomp, which is not an S4 generic in base R.

> methods("princomp")
[1] princomp.default* princomp.formula*

Non-visible functions are asterisked

> getAnywhere("princomp.default") # works
> getS3method("princomp", "default") # works
> showMethods("princomp")

Function "princomp":
)

show() is an S4 generic, not an S3 generic. ?methods points you to how to explore S4 generics.

> library(lme4)

... (and drink some coffee while you wait)
> methods(show)

no methods were found
Warning message:
function 'show' appears not to be generic in: methods(show)
> showMethods("show")

Function "show":
object = "ANY"
object = "traceable"
object = "ObjectsWithPackage"
object = "MethodDefinition"
object = "MethodWithNext"
object = "genericFunction"
object = "classRepresentation"
object = "ddenseMatrix"
object = "Matrix"
object = "lmer"
object = "summary.lmer"
object = "VarCorr"
object = "sparseMatrix"
object = "lmList"

> selectMethod("show", "summary.lmer")
Method Definition:

function (object) ...

Here getMethod() will also work, but selectMethod() is more likely to find `the code that's actually used'.

========================================
David W pointed me to this more complete discussion in Rnews.

http://www.r-project.org/doc/Rnews/Rnews_2006-4.pdf

R Help Desk
Accessing the Sources
by Uwe Ligges
ISSN 1609-3631
Vol. 6/4, October 2006

He urges readers to consult the source code often!

Some tidbits I don't want to forget:

These sections are direct quotes:

"
Code Hidden in a Namespace

In some cases, a seemingly missing function is called
within another function. Such a function might sim-
ply be hidden in a namespace (Tierney, 2003). Type
getAnywhere("FunctionName") in order to find it.
This function reports which namespace a function
comes from, and one can then look into the sources of
the corresponding package. This is particularly true
for S3 methods such as, for example, plot.factor:
R> plot.factor
Error: object "plot.factor" not found
R> getAnywhere("plot.factor")
A single object matching ’plot.factor’ was found
It was found in the following places
registered S3 method for plot from namespace
graphics
namespace:graphics
with value
### [function code omitted] ###
The file that contains the code of plot.factor is
‘$R HOME/src/library/graphics/R/plot.R’.
S3 and S4

As another example, suppose that we have the ob-
ject lmObj, which results from a call to lm(), and we
would like to find out what happens when the object
is printed. In that case, a new user probably types
R> print
in order to see the code for printing. The frustrating
result of the call is simply:
function (x, ...)
UseMethod("print")

The more experienced user knows a call to
UseMethod() indicates that print() is an S3 generic
and calls the specific method function that is appro-
priate for the object of class class(x). It is possible
to ask for available methods with methods(print).
The function of interest is the S3 method print.lm()
from namespace stats (the generic itself is in the base
package namespace).

A method hidden in a names-
pace can be accessed (and therefore printed) directly
using the ::: operator as in stats:::print.lm.

In order to understand and change S4 related
sources, it is highly advisable to work directly
with a package’s source files. For a quick look,
functions such as
getClass(),
getGeneric(), and
getMethod()
are available. The following example
prints the code of the show() method for mle objects
from the stats4 package:
R> library("stats4")
R> getMethod("show", "mle")

"

Posted in R | Comments Off on R: recover commands & track methods

disk partitioning and copying across machines:

In the past, I've used g4u to copy partitions over the network, but the new Dell PCs have some kind of Pentium and ethernet hardware that g4u does not recognize. So this presents a fruitful opportunity to learn.

Copy the MBR from /dev/hda into a file in the working directory:

dd if=/dev/hda of=backup-hda.mbr count=1 bs=512

Copy the partition setup, including the extended partitions:

sfdisk -d /dev/hda > backup-hda.sf

To restore the MBR

dd if=backup-hda.mbr of=/dev/hda

Restore the extended partitions:

sfdisk /dev/hda < backup-hda.sf

Before compressing a partition, do this to write a 0 over every open spot, so the compression algorithm will be most efficient (delete the 0bits file afterward):

dd if=/dev/zero of=/TARGETPARTITION/0bits bs=20971520 # bs=20m

Netcat can be used to copy partitions over the network. Mysterious, somewhat slow. In the new netcat, the procedure does not involve dd, so it is not clear how to speed this up with bs settings.

http://www.cyberciti.biz/tips/howto-copy-compressed-drive-image-over-network.html

On the TARGET machine, 129.237.61.XX, make sure port 12345 is unfirewalled. Then set netcat to grab the incoming stream on 12345 and divert it to the disk /dev/sdb (one can also target a particular partition, such as /dev/sdb2):

nc -l 12345 | bzip2 -d > /dev/sdb

After that, go to the SOURCE machine, compress the source disk /dev/sda, and pipe it through nc over to the other system:

bzip2 -c /dev/sda | nc 129.237.61.XX 12345

That does work, but not speedy.
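
One thing I have not tried: putting dd back into the pipeline purely to control the read block size. A sketch of the idea (the bs value is a guess to experiment with, not a tested setting):

# on the TARGET machine, listen exactly as before
nc -l 12345 | bzip2 -d > /dev/sdb

# on the SOURCE machine, let dd do large block reads before compressing
dd if=/dev/sda bs=1M | bzip2 -c | nc 129.237.61.XX 12345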

I've been experimenting with Partition Image (partimage) for the same job.

Posted in Linux | Comments Off on disk partitioning and copying across machines:

mime types, update-desktop-database, and Linux Alternatives

What to do if an unexpected program tries to open a file? You want Acrobat Reader, but get that horrible kghostview.

Understand this:

To find programs, some parts of the desktop framework use the settings given by the "alternatives" framework. Check /etc/alternatives, a big collection of symbolic links.

Here's how you can revise those settings. After I installed SeaMonkey, I noticed lots of programs like R and Gnome Help were using SeaMonkey, not Firefox. I was stumped for a long time. Here's the fix.

$ sudo update-alternatives --config x-www-browser

There are 2 alternatives which provide `x-www-browser'.

Selection Alternative
-----------------------------------------------
1 /usr/bin/firefox-3.0
*+ 2 /usr/bin/seamonkey

Press enter to keep the default[*], or type selection number: 1
Using '/usr/bin/firefox-3.0' to provide 'x-www-browser'.
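
If you already know the answer, the same change can be made non-interactively with the --set form (the path must match one of the listed alternatives exactly):

sudo update-alternatives --set x-www-browser /usr/bin/firefox-3.0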

Other programs do not use alternatives, but rather try to use the xdg-mime framework. A generic command like "xdg-open whatever.pdf" is supposed to open that pdf file in the desired viewer. Use the "xdg-mime" functions to view & revise those settings. I've found that system is very temperamental.
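
For example, here is the basic query-then-set cycle. A minimal sketch, assuming the xdg-utils scripts are installed and that an AdobeReader.desktop file actually exists under /usr/share/applications (more on that below):

# ask which desktop file currently claims pdf files
xdg-mime query default application/pdf

# declare Adobe Reader the preferred pdf handler
xdg-mime default AdobeReader.desktop application/pdf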

Check /usr/share/applications, where programs are supposed to drop "desktop" files (text files with config information). update-desktop-database scans those and builds mimeinfo.cache, while defaults.list records the preferred application for each mime type. If there is a mis-configuration in the preferred app selection through xdg-open, the system falls back on mimeinfo.cache. I've tracked down the problem, though, so I can stop kghostview from being invoked (more below).

One other thing: the settings can be broken in the user's account. I've found several examples where the .local directory in the user's home account was full of wrong configurations. It kept trying to use gconf-editor to open pdf files. I never found out what the user was doing to make that association re-assert itself; possibly it was some right-clicking in nautilus.
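
To see what associations a user account has accumulated, something like this is a reasonable first check (the file names holding the associations have varied across releases, so this just searches everything):

grep -r "application/pdf" ~/.local/share/applications/ 2>/dev/null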

Here is a message I wrote on May 5, 2009, that summarizes a learning process on mime type management.

When RPMs install, they often have a post script that executes
"update-desktop-database", which scans the desktop files under
/usr/share/applications and builds
/usr/share/applications/mimeinfo.cache.

To my surprise/astonishment, the update-desktop-database function is
completely undocumented, not even a man page or a mention in the
README file from the xdg group that provides it. To my even greater
surprise, there is apparently no way to predict which programs will be
at the front of the list for each mime type. The mimeinfo.cache is
used by programs like nautilus to know what programs are available.
update-desktop-database, for reasons I don't know, places ghostview
first among pdf opener programs.

More and more programs are relying on the "xdg-open" script to select
viewers for files. xdg-open ends up detecting the desktop framework
in use, in my case Gnome, and then it passes the pdf file off to the
program gnome-open, which is supposed to check for "defaults.list" in
the user's config and in the system at /usr/share/applications. The
mimeinfo.cache file is not supposed to be the dominant setting here,
because the preferred viewer is supposed to be specified in
defaults.list. We have AdobeReader.desktop specified there.

However, when the config in defaults.list is broken, xdg-open (hence
gnome-open) doesn't know what to do, so they consult mimeinfo.cache
and take the first program listed.

Our config was broken because Adobe was installed incorrectly.
The RPM that installed AdobeReader did not copy AdobeReader.desktop
into /usr/share/applications. As a result, when the user tries to
configure "xdg-open" (via xdg-mime) or use "gnome-open", both try to
use AdobeReader.desktop, fail, and then fall back to the things in
/usr/share/applications/mimeinfo.cache. One can just copy
AdobeReader.desktop from /opt/Adobe/... into /usr/share/applications
to fix this. To be fancier, one can run "xdg-mime install "
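
A minimal sketch of that fix; the source path under /opt/Adobe is hypothetical and depends on the Reader version installed, so locate the real desktop file first:

# copy the desktop file into place (source path is a guess)
sudo cp /opt/Adobe/Reader9/Resources/Support/AdobeReader.desktop /usr/share/applications/

# rebuild mimeinfo.cache so nautilus and friends see the new entry
sudo update-desktop-database /usr/share/applications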

To protect my systems from ever using kghostview, I ended up deleting
the MimeType line in all of the relevant desktop files in
/usr/share/applications/kde. After that, "update-desktop-database"
can be called and kghostview is never listed as a pdf opener.
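
That hand-editing can be scripted. A cruder version of the same idea, which strips the MimeType line from every kde desktop file rather than just the relevant ones (it leaves .bak backups behind):

# delete the MimeType associations, keeping a .bak copy of each file
sudo sed -i.bak '/^MimeType=/d' /usr/share/applications/kde/*.desktop

# rebuild the cache so the change takes effect
sudo update-desktop-database /usr/share/applications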

Posted in Linux | Comments Off on mime types, update-desktop-database, and Linux Alternatives

deb package tips that are hard to remember:

On http://ubuntuforums.org/showthread.php?t=486897, I found this useful tip:

# Generate a list of dev package names for packages that are installed
# (some of these may already be installed)
# (this would probably take a while to run)
dpkg -l | grep '^ii' | awk '{print $2}' | egrep -v '\-dev$' | \
  while read p; do
    devpkg="$p-dev"
    if apt-cache show "$devpkg" > /dev/null 2>&1; then echo "$devpkg"; fi
  done > packages.txt

# install the found dev packages
cat packages.txt | while read p; do sudo aptitude install $p; done
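
Answering aptitude's prompts once per package gets old. A hedged alternative is to hand the whole list to a single aptitude run (GNU xargs assumed, for the -a flag):

xargs -a packages.txt sudo aptitude -y install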

Posted in Linux | Comments Off on deb package tips that are hard to remember: