Linux Slashdot

Slashdot: Linux
News for nerds, stuff that matters

Reglue: Opening Up the World To Deserving Kids With Linux Computers

Wed, 30/07/2014 - 09:17
jrepin writes: Today, a child without access to a computer (and the Internet) at home is at a disadvantage before he or she ever sets foot in a classroom. The unfortunate reality is that in an age where computer skills are no longer optional, far too many families don't possess the resources to have a computer at home. Linux Journal recently had the opportunity to talk with Ken Starks about his organization, Reglue (Recycled Electronics and GNU/Linux Used for Education) and its efforts to bridge this digital divide.

Read more of this story at Slashdot.

Categories: Linux News


Valencia Linux School Distro Saves 36 Million Euro

Sun, 27/07/2014 - 22:04
jrepin (667425) writes "The government of the autonomous region of Valencia (Spain) earlier this month made available the next version of Lliurex, a customisation of the Edubuntu Linux distribution. The distro is used on over 110,000 PCs in schools in the Valencia region, saving some 36 million euro over the past nine years, the government says." I'd like to see more efforts like this in the U.S.; if mega school districts are paying for computers, I'd rather they at least support open source development as a consequence.

Read more of this story at Slashdot.

Categories: Linux News

A Router-Based Dev Board That Isn't a Router

Sun, 27/07/2014 - 21:10
An anonymous reader writes with a link to an intriguing device highlighted at Hackaday (it's an Indiegogo project, too, if it excites you to the tune of $90, and it seems well on its way to meeting its modest goal): The DPT Board may be of interest to anyone looking to hack up a router for their own connected project or IoT implementation: hardware based on a fairly standard router, loaded up with OpenWRT, with a ton of I/O to connect to anything. It's basically a hugely improved version of the off-the-shelf routers you can pick up through the usual channels. On board are 20 GPIOs, 16MB of flash, 64MB of RAM, two Ethernet ports, 802.11n Wi-Fi, and a USB host port. The board ships with OpenWRT pre-installed, making it relatively easy to connect this small router-like device to LED strips, sensors, or whatever other project you have in mind.
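
For readers wondering what "a ton of I/O" looks like in practice, here is a minimal sketch of blinking an LED from one of the board's GPIO pins. It assumes the device exposes the legacy sysfs GPIO interface (common on OpenWRT-era kernels) and that a Python interpreter has been installed via opkg; the pin number 20 is purely illustrative, since the real numbering depends on the board's SoC.

#!/usr/bin/env python3
# Illustrative sketch: toggle a GPIO pin through /sys/class/gpio.
# The pin number and paths are assumptions; check the board's documentation.
import time

PIN = "20"                       # hypothetical GPIO number
BASE = "/sys/class/gpio"

def export(pin):
    try:
        with open(BASE + "/export", "w") as f:
            f.write(pin)
    except OSError:
        pass                     # already exported

def write(path, value):
    with open(path, "w") as f:
        f.write(value)

export(PIN)
write(BASE + "/gpio" + PIN + "/direction", "out")
for _ in range(10):              # blink an LED wired to the pin
    write(BASE + "/gpio" + PIN + "/value", "1")
    time.sleep(0.5)
    write(BASE + "/gpio" + PIN + "/value", "0")
    time.sleep(0.5)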

Read more of this story at Slashdot.

Categories: Linux News

Linus Torvalds: "GCC 4.9.0 Seems To Be Terminally Broken"

Sun, 27/07/2014 - 18:40
hypnosec (2231454) writes to point out a pointed critique from Linus Torvalds of GCC 4.9.0, after a random panic was discovered in a load-balancing function in Linux 3.16-rc6. In an email to the Linux kernel mailing list outlining two separate but possibly related bugs, Linus describes the compiler as "terminally broken," and worse ("pure and utter sh*t," only with no asterisk). A slice: "Lookie here, your compiler does some absolutely insane things with the spilling, including spilling a *constant*. For chrissake, that compiler shouldn't have been allowed to graduate from kindergarten. We're talking "sloth that was dropped on the head as a baby" level retardation levels here .... Anyway, this is not a kernel bug. This is your compiler creating completely broken code. We may need to add a warning to make sure nobody compiles with gcc-4.9.0, and the Debian people should probably downgrate their shiny new compiler."
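
Linus's suggestion of "a warning to make sure nobody compiles with gcc-4.9.0" boils down to gating the build on the compiler version. As a rough standalone illustration (not the kernel's actual mechanism, which does this kind of gating with preprocessor macros), a pre-build script could simply refuse to proceed when it sees the affected release:

#!/usr/bin/env python3
# Rough sketch: abort a build if the installed gcc reports version 4.9.0.
# "gcc -dumpversion" prints e.g. "4.9.0" on compilers of that era.
import subprocess
import sys

def gcc_version():
    out = subprocess.check_output(["gcc", "-dumpversion"], text=True).strip()
    parts = (out.split(".") + ["0", "0"])[:3]
    return tuple(int(p) for p in parts)

version = gcc_version()
if version == (4, 9, 0):
    sys.exit("gcc 4.9.0 detected; it is known to miscompile this code, "
             "please use a different compiler release")
print("compiler looks fine: gcc", ".".join(map(str, version)))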

Read more of this story at Slashdot.

Categories: Linux News

GOG.com Announces Linux Support

Thu, 24/07/2014 - 15:38
For years, Good Old Games has made a business out of selling classic PC game titles completely free of DRM. Today they announced that their platform now supports Linux. They said, "We've put much time and effort into this project and now we've found ourselves with over 50 titles, classic and new, prepared for distribution, site infrastructure ready, support team trained and standing by ... We're still aiming to have at least 100 Linux games in the coming months, but we've decided not to delay the launch just for the sake of having a nice-looking number to show off to the press. ... Note that we've got many classic titles coming officially to Linux for the very first time, thanks to the custom builds prepared by our dedicated team of penguin tamers. ... For both native Linux versions, as well as special builds prepared by our team, GOG.com will provide distro-independent tar.gz archives and support convenient DEB installers for the two most popular Linux distributions: Ubuntu and Mint, in their current and future LTS editions."

Read more of this story at Slashdot.

Categories: Linux News

Ask Slashdot: Linux Login and Resource Management In a Computer Lab?

Tue, 22/07/2014 - 17:40
New submitter rongten (756490) writes I am managing a computer lab composed of various kinds of Linux workstations, from small desktops to powerful workstations with plenty of RAM and cores. The users' $HOME is NFS-mounted, and they access the machines via the console (no user switching allowed), ssh, or x2go. In the past, the powerful workstations were reserved for certain power users, but now even "regular" students may need access to the high-memory machines for some tasks. Is there a resource-management system that would handle the following? Forbid the same user from logging in graphically more than once (like UserLock); limit the number of ssh sessions (e.g. no user running distcc and spamming the rest of the machines, or even worse, running jobs everywhere in parallel); give priority to the console user (i.e. automatically renice remote users' jobs and restrict their memory usage); and avoid swapping and long waits (i.e. everyone trying to log into the latest and greatest machine, so cap the number of logins in proportion to the capacity of the machine). The system being put in place uses Fedora 20 with LDAP/PAM authentication; it is Puppet-managed and NFS-based. In the past I tried to achieve similar functionality via cron jobs, login scripts, ssh and NX management, and a queuing system, but it was not an elegant solution and relied on a lot of hacks. Since these requirements should be pretty standard for a computer lab, I am surprised that I cannot find something already written for them. Do you know of such a system, preferably open source? A commercial solution could be acceptable as well.
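
One building block worth mentioning here is pam_exec, which can run a site-local policy script at login time. The sketch below is purely illustrative, assuming a line such as "account required pam_exec.so /usr/local/sbin/limit_logins.py" in the relevant /etc/pam.d files (sshd, the display manager, and so on); it enforces just one of the requests above, a per-user cap on concurrent sessions on a host. pam_exec exports PAM_USER and related variables to the child process, and a non-zero exit status makes the module fail. Per-process niceness and memory limits would more naturally come from pam_limits or cgroups.

#!/usr/bin/env python3
# Hypothetical pam_exec account hook: deny login when a user already has
# MAX_SESSIONS sessions open on this host. The script name and the limit
# are assumptions for illustration only.
import os
import subprocess
import sys

MAX_SESSIONS = 2    # illustrative; scale per machine capacity

def current_sessions(user):
    # `who` prints one utmp line per login session (console and ssh at
    # least; whether x2go registers there depends on the setup).
    out = subprocess.check_output(["who"], text=True)
    return sum(1 for line in out.splitlines() if line.split()[:1] == [user])

user = os.environ.get("PAM_USER", "")
if not user or user == "root":
    sys.exit(0)     # never lock out root or a misconfigured stack
if current_sessions(user) >= MAX_SESSIONS:
    print(f"{user}: already has {MAX_SESSIONS} sessions on this host",
          file=sys.stderr)
    sys.exit(1)     # non-zero exit makes the PAM account stack fail
sys.exit(0)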

Read more of this story at Slashdot.

Categories: Linux News
