
Default "secrets"

By Jake Edge
January 5, 2011

One of the simplest principles of cryptography is that the secret keys which are used for encryption must be kept, well, secret. Exposing the key to anyone other than the intended recipient of the message can pretty obviously lead to a compromise of the encrypted data. So, for example, hardcoding a secret key into a firmware image is unlikely to lead to secure communications using that key. Unfortunately, networking device makers—and the creators of free software firmware replacements for those devices—seem to have missed, or ignored, this basic principle.

The problem stems from the SSL keys that are installed into the firmware images for the devices. In many cases, those keys—including the supposedly private key—are generated when the image is built and then flashed into hundreds or thousands of different devices. If one can get access to the private SSL key, traffic encrypted with it (which might include HTTPS or VPN traffic) can be trivially decrypted. As the announcement of the LittleBlackBox project describes, it is, unfortunately, rather easy to obtain said keys; in fact the project provides a database of thousands of private keys indexed by their public key.

In practical terms, that means an attacker can access a vulnerable SSL-protected web site, retrieve the public key certificate, look up the corresponding private key, and decrypt any traffic that is sent or received by the web site. An attacker could also do a man-in-the-middle attack by pretending to be the site in question, as there would be no way to determine that the spoofer wasn't the real site. In order to do either of those things, though, the attacker must get access to the encrypted data stream.
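As a sketch of how that lookup works (the certificate here is a locally generated stand-in; a real attacker would pull the device's own certificate off the wire with `openssl s_client`), the database index is simply a hash of the certificate's public key:

```shell
# Stand-in for a device certificate. In the real attack you would fetch
# the device's own cert instead, e.g.:
#   openssl s_client -connect 192.168.1.1:443 </dev/null | openssl x509 > device.pem
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
    -subj "/CN=192.168.1.1" -keyout device.key -out device.pem 2>/dev/null

# Hash the public key -- this is the value a LittleBlackBox-style database
# can use to index the corresponding private key.
openssl x509 -in device.pem -noout -pubkey | openssl sha1
```

If every unit of a product line ships the same keypair, this hash is identical across all of them, which is exactly what makes the database lookup practical.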

Open, or weakly secured, wireless networks are the easiest way for an attacker to get that access—or to become a man in the middle. As the concerns over Firesheep have shown, there is still a lot of traffic that travels unencrypted over wireless networks. Ironically, HTTPS is touted as a solution to that problem, but that only works if the private keys are kept secret. For the web applications targeted by Firesheep, that is not likely to be a problem, as their private keys were presumably generated individually and kept safe. But for others who might be using wireless networks to configure their routers—or connect to another network via VPN—it could be a much bigger problem.

While reconfiguring your router from the local coffee shop may be a pretty rare event, even having the HTTPS-enabled web server available over the internet gives an attacker the ability to retrieve the public key, which can then be looked up in the LittleBlackBox database. If that SSL key is used for other things like VPN—something that might well be used at an open WiFi hotspot—that traffic is at risk as well. The right solution seems clear: don't supply default "secrets". In some ways, this problem parallels the longstanding, but hopefully improving, situation with default administrative passwords.

Device manufacturers and firmware projects should not be shipping SSL keys; instead they should either generate them at "first boot" or provide a way for users to generate and upload their own keys. There are a few different reasons that it isn't always done that way today, from concerns over devices having enough entropy to generate a random key to the amount of time it can take to generate a key on a slow CPU, but those reasons don't really outweigh the damage that could be done. Users who enable HTTPS access to their devices do so with the idea that it will be more secure, and can be used in places where unencrypted communication doesn't make sense.
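A first-boot generation hook could be as small as the following sketch (the paths, filenames, and certificate lifetime here are illustrative, not taken from any particular firmware):

```shell
#!/bin/sh
# Hypothetical first-boot hook: generate a per-device keypair only if one
# doesn't already exist, so every unit ends up with a unique key instead
# of a shared, well-known one baked into the firmware image.
KEY=device.key
CERT=device.crt

if [ ! -f "$KEY" ]; then
    openssl req -x509 -newkey rsa:2048 -nodes -days 3650 \
        -subj "/CN=router.local" -keyout "$KEY" -out "$CERT" 2>/dev/null
fi
```

On a slow CPU this can take a while, which is one of the objections mentioned above; deferring generation to the first HTTPS access, or gathering entropy in the background, are the usual workarounds.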

There are also hurdles to overcome in either creating a key for each device (and/or firmware image) or providing instructions for users but, once again, shipping a shared key really doesn't help users who are relying on the device for secure communications. While some in the DD-WRT community don't see it as a big problem, it is likely more serious than they are crediting. It would make far more sense to disable HTTPS access entirely—perhaps requiring a manual process to generate keys and enable that access—than it does to provide well-known keys.

While the problem highlighted by LittleBlackBox isn't earth-shattering, it does show the sometimes cavalier attitude towards security that is shown by some in the embedded device arena. When you are selling (or providing) a device or firmware that is meant to secure someone's network, it makes sense to proceed carefully. And to keep secrets, secret.


Index entries for this article
Security: Embedded systems
Security: Encryption/Key management



Default "secrets"

Posted Jan 6, 2011 5:31 UTC (Thu) by adamgundy (subscriber, #5418) [Link] (23 responses)

the problem with generating the key on first use is that you get warnings from web browsers about 'invalid' (unsigned) certificates.

the reason all the embedded devices are shipped with hardcoded keys is that the vendors have paid for a signed cert...

Default "secrets"

Posted Jan 6, 2011 10:54 UTC (Thu) by Fowl (subscriber, #65667) [Link] (2 responses)

Never seen that before. All I've seen have been self-signed, or involve getting a subdomain and cert from the vendor (e.g. Windows Home Server).

Having thought about how else it could "work" for a while, the only two ways I can think of are:

The vendor purchasing an FQDN, getting a CA-signed cert for that domain, and putting that cert in the firmware image, then either:

* pointing it at an RFC 1918 address (internal, e.g. 192.168.1.1)
* configuring the device to engage in some sort of DNS spoofing.

All of which... seems bad.

Am I close?

Default "secrets"

Posted Jan 6, 2011 17:35 UTC (Thu) by adamgundy (subscriber, #5418) [Link]

if I remember the Slashdot discussion correctly, I think they're shipping with signed keys for the default IP address, so eg https://192.168.0.1/ doesn't complain.

Default "secrets"

Posted Jan 7, 2011 11:48 UTC (Fri) by james (subscriber, #1325) [Link]

The domestic routers I've seen ship with DNS and DHCP servers enabled, and the DHCP tells clients to use the router as DNS server. That gives the router a clean way of resolving special domain names itself.

I imagine few users actually bother setting up static IP addresses, and many of those that do still use the router for DNS resolving (you don't know when your ISP is going to change their setup).

Shipping SSL enabled devices

Posted Jan 6, 2011 13:35 UTC (Thu) by alex (subscriber, #1355) [Link] (8 responses)

I have the same problem. We ship a pre-configured server that has a web-based component. Some users would like all that traffic to be encrypted via SSL but that will open a flood of support calls when they complain the browser says the connection is insecure.

We ship a USB with these systems for re-install, however I guess we'll have to make a custom key for every customer with their own unique signed certificates on them. I bet they won't keep the key secure either.

I wish there was a way to do it the SSH way, i.e. you've seen this machine once before so you can be sure it's the same machine.

Shipping SSL enabled devices

Posted Jan 6, 2011 17:40 UTC (Thu) by adamgundy (subscriber, #5418) [Link]

same problem here. we support SSL, but customers generally don't have the knowledge to self-sign, and don't want the cost (or more likely haven't got the skillz) to buy a signed cert from someone (also, it's more tricky to get a signed cert for a LAN only machine). end result is typically it doesn't get used.

Shipping SSL enabled devices

Posted Jan 7, 2011 8:26 UTC (Fri) by madhatter (subscriber, #4665) [Link] (6 responses)

There is, that's what "permanently store this certificate" is for. The sadness is that even browsers that allow you to do that make such a bloody song-and-dance about it, causing (as you say) support calls; whereas ssh accepts that this is a normal operational mode for self-signatures, pops up a simple text message that doesn't give a "sky is falling" impression, and gets on with it.

I agree that browsers handle this badly, but the better ones do handle it.

Shipping SSL enabled devices

Posted Jan 7, 2011 13:57 UTC (Fri) by ballombe (subscriber, #9523) [Link] (5 responses)

I do not think any browser associates the certificate with the IP address.
They only store the certificate, which is much less secure.

Shipping SSL enabled devices

Posted Jan 7, 2011 14:38 UTC (Fri) by madhatter (subscriber, #4665) [Link] (4 responses)

I'm not sure I accept that ssh does either. If you access a remote host by FQDN, then the host name is what's stored in known_hosts, along with the public key (at least, this seems to be so for my ssh, which is OpenSSH_5.5p1). ssh *can* cache an IP address, to be sure, but for people making use of the DNS, I'm not sure it does.

Similarly, once you tell the browser to cache a certificate, the certificate has the FQDN for which it's valid embedded inside itself (as the CN). That certificate, cached in a trusted cache though it be, can't be used to authenticate another site, even one using the same keypair (which shouldn't happen).

The two situations seem remarkably similar to me.

Shipping SSL enabled devices

Posted Jan 7, 2011 15:04 UTC (Fri) by rfunk (subscriber, #4054) [Link] (3 responses)

SSH stores both the name and address. When the address changes but the name and key remain the same, I get an alert about it.

Shipping SSL enabled devices

Posted Jan 7, 2011 15:31 UTC (Fri) by madhatter (subscriber, #4665) [Link] (2 responses)

I beg to differ, at least partly. "foo" is a disposable /etc/hosts entry for my laptop; risby is my desktop.

[madhatta@risby madhatta]$ ping foo -c 1
PING foo (192.168.3.202) 56(84) bytes of data.
64 bytes from foo (192.168.3.202): icmp_req=1 ttl=64 time=0.290 ms
[...]
[madhatta@risby madhatta]$ ssh foo
madhatta@foo's password:
Last login: Fri Jan 7 15:16:47 2011 from risby.home.teaparty.net
[madhatta@anni ~]$

log out, reIP foo to 192.168.3.203, update risby's /etc/hosts, and try again:

[madhatta@risby madhatta]$ ping foo -c 1
PING foo (192.168.3.203) 56(84) bytes of data.
64 bytes from foo (192.168.3.203): icmp_req=1 ttl=64 time=1.50 ms
[...]
[madhatta@risby madhatta]$ ssh foo
Warning: Permanently added the RSA host key for IP address '192.168.3.203' to the list of known hosts.
madhatta@foo's password:
Last login: Fri Jan 7 15:16:58 2011 from risby.home.teaparty.net
[madhatta@anni ~]$

I see no alert. I do see a warning that a key has been cached against a new IP address, but when I repeated this test (with that key then cached against the name and both IP addresses) I saw no message whatsoever.

I accept that keys are stored against ip addresses as well as against names, but I don't accept a general assertion that when "the address changes but the name and key remain the same, I get an alert about it". When the address is novel for that name, yes; other times, no.

Cacheing an SSL certificate in a browser creates an entity that links a public key and a domain name. SSH goes further than this, I accept, but it doesn't go all the way.

Remember that the original comment that started me off was

> I wish their was a way to do it the SSH way, i.e. you've seen this
> machine once before so you can be sure it's the same machine.

I am not yet convinced that "permanently store this certificate" is not such a mechanism.

Shipping SSL enabled devices

Posted Jan 7, 2011 17:21 UTC (Fri) by giraffedata (guest, #1954) [Link] (1 responses)

Warning: Permanently added the RSA host key for IP address '192.168.3.203' to the list of known hosts.

...

I see no alert. I do see a warning that a key has been cached against a new IP address,

You don't mean cached. The list of known hosts is not a cache. A cache is a local copy you keep to accelerate future lookups; the list of known hosts has an entirely different purpose.

It's interesting to see the detail that you can switch back to a previously seen IP address and SSH won't issue a scary message, but I'm not sure that affects any of this discussion, because the scary message on the original change is enough to trigger all the concerns.

SSH is wrong to do this, by the way. The whole point of SSL is that you don't trust the IP network routing, so you authenticate an identity that is independent of that. And the whole point of DNS is that you can move a server to another IP address (as you often must to change its physical location) and users don't see a change in identity.

And even if SSH is concerned the public key encryption could be broken and wants to offer the additional security of telling you the name resolution changed, it shouldn't associate the IP address with the key, but rather with the FQDN, resulting in the message, "Warning: adding IP address 192.168.3.203 to the list of IP locations for foo".

Shipping SSL enabled devices

Posted Jan 7, 2011 19:11 UTC (Fri) by madhatter (subscriber, #4665) [Link]

According to wikipedia, you are right, I don't mean cache. I hadn't been aware that I was misusing that term, and will try to avoid doing so in future - thanks for that! - though now I need a word to describe what ssh is doing.

In fairness to ssh, as I demonstrated above, it is doing exactly what you asked it to: putting up a message when it associates a new IP address with a known host name. But I agree the message could be more helpful.

I think this thread is rather separating those like ballombe, who do want to know when the IP of a server offering a service they use changes, from those who don't, like yourself.

I've found this thread most stimulating, and I now find myself having to sit down and think harder about what I want from an authentication service in a world where DNS is not trustworthy.

Default "secrets"

Posted Jan 6, 2011 23:13 UTC (Thu) by iabervon (subscriber, #722) [Link] (10 responses)

It's a good thing we have browsers to tell us that devices that are secure are insecure and devices that are insecure are secure.

The sensible thing for browsers to do with SSL connections to private IP addresses is to (a) insist that they be self-signed certificates, because no CA in their list signing them could possibly be trustworthy; (b) tell the user to refer to the documentation for the device to find out how to verify the certificate; (c) ignore the subject of the certificate, since it's got to be meaningless, and use the fingerprint instead to find it again; (d) store a user-chosen name which will be displayed differently from a PKI-certified name.

Of course, it's a bit unclear how the device should communicate the correct fingerprint to the user. Probably the right way would be to boot the device at the factory, get its fingerprint, and print it on a label.
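The private-address test that iabervon's scheme depends on is easy to express. A minimal sketch in Python (the function name is made up for illustration; a browser would perform the equivalent check before deciding how to treat the certificate):

```python
import ipaddress

def is_private_host(host):
    """Return True for RFC 1918 / loopback / link-local addresses, where a
    CA-signed certificate cannot meaningfully attest ownership, since no
    one party owns those address ranges."""
    try:
        return ipaddress.ip_address(host).is_private
    except ValueError:
        return False  # not an IP literal at all, but a DNS name

print(is_private_host("192.168.1.1"))   # True
print(is_private_host("example.com"))   # False
```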

Default "secrets"

Posted Jan 6, 2011 23:27 UTC (Thu) by adamgundy (subscriber, #5418) [Link] (9 responses)

and all of that means that 99% of users of the device are now either (a) on the phone to your support department because the link they were told to click has thrown up a scary warning, or (b) not using https at all.

how has that improved security?

this is the entire problem. there's a good (in some sense of the word) reason for having these hard-coded, signed keys. the problem is that now it's busted, and there's no clear solution.

there are many people out there that think the whole 'self signed cert' scary warnings are useless, and should be ditched entirely - maybe just don't change the URL bar color if the cert doesn't match - on the grounds that some encryption (without authentication) is a whole lot better than no encryption. that doesn't play well with commercial sites, though, who are paranoid someone's spoofed their DNS and want the browser to throw a scary warning.

Default "secrets"

Posted Jan 7, 2011 0:15 UTC (Fri) by iabervon (subscriber, #722) [Link] (8 responses)

Users are currently using an insecure method to connect to their devices, and being told it is secure. That's a security flaw that browsers are helping to cause. As a minimum, browsers should identify that a device is using a PKI-issued cert for a private identity, and simply tell users that this can't possibly provide any meaningful security.

Personally, I like the method that Chromium uses: if a site is using https in a way that the browser doesn't trust, it crosses out the "https" in the URL in red and acts like it's a normal unsecured connection. It's hard for commercial sites to complain about this, since they don't want the browser to give big scary warnings for their http URLs, which are obviously not protected. But the browser should similarly cross out the "https" in the case where it's a certificate signed by a CA for something that the browser knows the CA didn't verify.

Default "secrets"

Posted Jan 7, 2011 0:17 UTC (Fri) by dlang (guest, #313) [Link] (3 responses)

why do you say that a PKI cert for a private entity cannot possibly be valid? I have quite a few servers in my company that prove different.

Default "secrets"

Posted Jan 7, 2011 3:50 UTC (Fri) by foom (subscriber, #14868) [Link] (2 responses)

Your certs are probably for something like hostname.office.mycompany.com, which is perfectly fine.

A cert for the IP address "192.168.0.1" though, is *NOT* fine, there's no way a CA could possibly verify that you own that address (since, well, you don't).

Default "secrets"

Posted Jan 7, 2011 6:21 UTC (Fri) by dlang (guest, #313) [Link] (1 responses)

certs are generally not issued for IP addresses in the first place, be they public IP addresses or private IP addresses.

Default "secrets"

Posted Jan 7, 2011 6:28 UTC (Fri) by foom (subscriber, #14868) [Link]

Sure they are. Google finds me this, for example:
https://www.globalsign.com/digital_certificate/options/pu...

Default "secrets"

Posted Jan 7, 2011 0:28 UTC (Fri) by adamgundy (subscriber, #5418) [Link]

chrome(ium) also throws up a big red warning page that you have to accept before you can proceed. many (most) users will go no further.

as far as whether a CA verified the IP address.. I don't think it's conclusive that they *didn't* verify it. most of these certs are on (consumer) routers, which have a default IP address. it's not beyond the realm of possibility that a CA verified that the IP address they're signing is the one the router uses. that's just as valid as the 'certification' they do for a domain name by sending out an email to postmaster@... and hoping 'postmaster' doesn't just click the link because 'it looked official' (and yes, I've seen that happen).

this is one of two recent problems that really have no good solution (read: the solutions are very expensive). firesheep being the other one, making session surfing ridiculously easy.

the only real, cost effective solution to these problems is an SSH style 'seen it' key repo in the browsers. the first time you visit a site with a self signed cert (which is otherwise valid), you get a very *non scary* warning that this is the first time you've visited the site. after that, no warnings whatsoever unless the cert changes. the problem with this solution is: IE6, IE7, IE8, Firefox < 4, Chrome < 6, etc, etc will still be throwing fits about 'invalid certs'.

Default "secrets"

Posted Jan 7, 2011 1:11 UTC (Fri) by djao (guest, #4263) [Link] (2 responses)

As a minimum, browsers should identify that a device is using a PKI-issued cert for a private identity, and simply tell users that this can't possibly provide any meaningful security.

Your complaint, while valid, misses the biggest issue. It's a bit like ticketing a drunk driver for a seatbelt violation.

The biggest problem is that browsers are totally and utterly dependent on certificates for authentication. The widespread incorrect belief in the need for certificates represents the single biggest factor in perpetuating exactly the sort of insecure situations that this very article is about.

Do you trust SSH? As others here have pointed out, SSH (in its default configuration) uses no certificates. The program simply caches the key the first time it is used, and warns the user if the key ever changes. The SSH authentication model is nowadays called TOFU or "trust on first use." For someone setting up a wireless router, trust-on-first-use is perfectly fine. A user, even an unskilled one, is generally aware of the fact that they are setting up a router for the first time, and that they might have to click on boxes to accept a key.

There are many other wireless hardware devices with security implications (such as bluetooth keyboards) that already use TOFU authentication with great success. All the posters here who are complaining that it can't be done, that it would generate hundreds of support calls, are simply ignoring the fact that it not only can be done, but already is being done with no problems.

The fault in this case lies squarely with the browser manufacturers, for not supporting TOFU, and more generally for providing no authentication mechanisms whatsoever other than certificates. (Yes, a skilled user can achieve the equivalent of TOFU in Firefox. It takes five mouse clicks worth of scary dialog boxes. This doesn't count as support.) Secondary blame belongs to the companies that generate certificates, for lobbying browsers to require certificates in order to preserve their lucrative protection racket.
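The TOFU logic djao describes is small. A minimal in-memory sketch (the names and return values are invented for illustration; a real client would persist the pins to disk and prompt the user when a pin changes):

```python
import hashlib

KNOWN = {}  # hostname -> pinned certificate fingerprint

def check_tofu(host, der_cert, known=KNOWN):
    """Trust-on-first-use: pin the certificate the first time a host is
    seen, accept silently while it stays the same, and flag a change."""
    fp = hashlib.sha256(der_cert).hexdigest()
    if host not in known:
        known[host] = fp       # first contact: remember this key
        return "pinned"
    return "ok" if known[host] == fp else "CHANGED"
```

The "CHANGED" case is where the user-visible warning belongs; the first two cases need at most the quiet, one-line notice that ssh gives.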

Default "secrets" and Trust On First Use

Posted Jan 7, 2011 17:49 UTC (Fri) by scripter (subscriber, #2654) [Link] (1 responses)

TOFU might be a step in the right direction, but it's not going to eliminate scary certificate warning support calls when someone changes out their router for a different model.

Default "secrets" and Trust On First Use

Posted Jan 7, 2011 19:05 UTC (Fri) by iabervon (subscriber, #722) [Link]

Depends; if the big warning is: "This is not your usual router!" the user is going to say, "Well, that's good, because I got a new one." Car companies don't get support calls when people buy new cars and their old car keys don't work any more. It comes back to the fact that browsers give misleading messages about security concerns, based on the assumption that your router is either a bank or someone pretending to be a bank. If they had suitable behavior for talking to network hardware, it would be easy to have a big warning that is either really scary or comforting depending on whether you know that you changed out your router. I mean, if someone else has swapped your router for a different one without your knowledge, your computer probably ought to give you a big scary warning; just because the attacker who has hijacked your connection to your router is using a router you might have bought doesn't make it any better.

Default "secrets"

Posted Jan 6, 2011 10:46 UTC (Thu) by Fowl (subscriber, #65667) [Link] (3 responses)

Perhaps I misunderstand SSL, but I thought that the certificate was only useful to ensure the identity, not to encrypt the session. I mean each session has randomised session keys not based on the private key.

The private key is just to prove that you are the server you say you are, either by a trusted 3rd party you already have the keys for or key continuity management - store the key the first time and hope that your first connection isn't compromised! ("the ssh model")

So yes, having the same private key would in effect allow anyone to pretend to be your device, but without MITM that shouldn't be that useful. That's not to say that it's a good situation, clearly SSL (and SSH!) keys should be generated on first boot, with an opportunity to upload "real" keys.

Or am I on the wrong track entirely?

Default "secrets"

Posted Jan 6, 2011 12:38 UTC (Thu) by erwbgy (subscriber, #4104) [Link] (1 responses)

Perhaps I misunderstand SSL, but I thought that the certificate was only useful to ensure the identity, not to encrypt the session. I mean each session has randomised session keys not based on the private key.

The public and private keys are used when exchanging the session key, so if you have access to the private key then you will be able to find out the session key and decrypt the traffic.

The Wikipedia TLS page explains this well:

In order to generate the session keys used for the secure connection, the client encrypts a random number with the server's public key and sends the result to the server. Only the server should be able to decrypt it, with its private key.
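A toy illustration of that point, using textbook-sized numbers and no padding (real TLS uses 2048-bit keys with PKCS#1 padding): whoever holds the private exponent can recover the client's premaster secret from the recorded handshake.

```python
# Toy RSA, purely to show why a leaked private key decrypts RSA key exchange.
p, q = 61, 53
n = p * q                            # public modulus (part of the cert)
e = 17                               # public exponent (part of the cert)
d = pow(e, -1, (p - 1) * (q - 1))    # private exponent -- the leaked secret

premaster = 42                       # client's random session-key seed
ciphertext = pow(premaster, e, n)    # encrypted with the server's public key
recovered = pow(ciphertext, d, n)    # anyone holding d recovers it
print(recovered)                     # 42
```

With the premaster secret in hand, all of the session keys can be rederived and the captured traffic decrypted, which is why a firmware-wide private key is so damaging.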

Wrong, see Diffie-Hellman

Posted Jan 8, 2011 20:46 UTC (Sat) by kleptog (subscriber, #1183) [Link]

Well, the GP poster is correct, if Diffie-Hellman is enabled in SSL then you have perfect forward secrecy. In other words, even if someone has the private key and sniffs all the traffic, they *still* can't decrypt it.

http://en.wikipedia.org/wiki/Diffie%E2%80%93Hellman_key_e...

It's a neat trick whereby the server and client can agree on a key over an insecure channel.
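A toy version of the trick (tiny parameters for illustration only; real DH groups are 2048 bits or more) shows why the server's long-term RSA key never enters into the shared secret:

```python
# Toy Diffie-Hellman: only g^a and g^b cross the wire; the shared secret
# g^(ab) is never transmitted, so a leaked certificate key doesn't help
# an eavesdropper (this is the "perfect forward secrecy" property).
p, g = 23, 5          # public prime and generator
a, b = 6, 15          # each side's ephemeral private value

A = pow(g, a, p)      # client sends A
B = pow(g, b, p)      # server sends B

assert pow(B, a, p) == pow(A, b, p)   # both ends derive the same secret
print(pow(B, a, p))                   # 2
```

In TLS the server still signs its DH parameters with the certificate key, so the leaked key enables active MITM, but not passive decryption of recorded traffic.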

So this list is useful for MITM attacks but not always useful for eavesdropping. Now, if they have checked all these routers and confirmed that in fact DH is disabled by default, then we have a different problem indeed.

(Incidentally, I just tried my own router and Firefox doesn't say whether DH is enabled or not. Maybe that means no.)

For the fun of it, try surfing the web and rejecting any SSL connections that don't use DH. You'd be surprised at the number of sites that either (a) are incompetent or (b) want anyone who has the private key to be able to sniff your traffic. There are a lot of sites which will accept DH if you ask for it but will not use it by default.

Default "secrets"

Posted Jan 6, 2011 15:07 UTC (Thu) by jldugger (guest, #57576) [Link]

SSL encrypts both directions of traffic. The first part of this is to establish the identities, usually of the server (the client generally uses a login form to establish their identity). Without SSL encrypting all traffic, someone could potentially steal your session and submit forms on your behalf, ala firesheep.

Default "secrets"

Posted Jan 6, 2011 12:32 UTC (Thu) by NAR (subscriber, #1313) [Link]

There are a few different reasons that it isn't always done that way today, from concerns over devices having enough entropy to generate a random key to the amount of time it can take to generate a key on a slow CPU

I may be wrong here, but most of these devices don't have either a keyboard or a monitor attached to them. In order to configure them, they need to be connected to another computer (with a keyboard and a monitor) - why not generate the keys there? I presume there's more than enough CPU power and entropy there. I configured a new WiFi router just last week: I had to connect it with a UTP cable to the computer, put the attached CD into the computer, and run the configuration program (of course, on Windows), and that program generated e.g. the WPA2 key. I didn't even need to access the web-based interface.

Default "secrets" on DD-WRT etc

Posted Jan 6, 2011 16:20 UTC (Thu) by rfunk (subscriber, #4054) [Link] (3 responses)

Since the DD-WRT people seem not to care, and likely other affected projects/vendors as well, I'm now wondering if there are projects/vendors that actually do care about this sort of thing. And how quickly I can migrate my routers to their firmware.

Default "secrets" on DD-WRT etc

Posted Jan 7, 2011 9:54 UTC (Fri) by dsommers (subscriber, #55274) [Link] (1 responses)

I personally removed DD-WRT a few years ago when I discovered that there were hard-coded ACCEPT rules for specific IP addresses. The forum discussion with the upstream developer did not build any confidence in my eyes.

The argument used was that "these IP addresses are not valid any more and we will remove these iptables rules in the next release". That came without an ETA for the next release, and nobody saw any need to inform users about it, despite the fact that a couple of simple 'nvram' commands was all that was needed as a workaround.

So it does not surprise me at all that the DD-WRT community does not see littleblackbox as a problem for their firmware. For me this is yet another reason to stay away from DD-WRT.

I switched to X-WRT and later on to OpenWRT, and I find both to be much more open and secure router distributions. And it is quite easy to build the OpenWRT firmware yourself.

Default "secrets" on DD-WRT etc

Posted Jan 17, 2011 11:42 UTC (Mon) by eduperez (guest, #11232) [Link]

Probably not-so-related, but OpenWrt generates a new private key for SSH connections upon every firmware installation: I have reinstalled the same OpenWrt firmware on my router several times, and after each installation the SSH client detects a new key.

Default "secrets" on DD-WRT etc

Posted Feb 3, 2011 14:45 UTC (Thu) by ddwrt (guest, #72712) [Link]

Hi,

the claim written here that "the DD-WRT people" do not care is not right.

We noticed this article (and have even now subscribed to LWN) and we'll work on a solution.

Our main obstacle right now is that on most platforms we do not have enough space to put OpenSSL into the firmware for the key (and X.509) generation.
Secondly, we don't trust the current quality of randomness on embedded systems. (OK, that would still be better than having these "secret defaults".)

We also assume that offering people a service somewhere "out on the web" to generate the keys would lead to trust problems again.

The idea we have right now is to use JavaScript in the browser to generate the RSA key (locally) and the X.509 certificate.
We found code to do the RSA part already, but haven't finished off the X.509 part.


Copyright © 2011, Eklektix, Inc.
This article may be redistributed under the terms of the Creative Commons CC BY-SA 4.0 license
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds
