The data on your hard drive is the most critical item inside your computer, and the only item that cannot be replaced. It may be an unwanted hassle and expense to replace a defective memory module, monitor, or processor, but there is no replacing data once it is lost. In addition to the possibility of a simple hard drive failure, the threat of internet-borne worms and viruses poses an increasing risk of data loss or corruption. Although you may not be able to provide absolute protection for your hard drive, there are various ways to ensure that the data on it is protected. Five methods of backing up your data are summarized below.

1. USB Flash Drives

Although I am not recommending that flash drives be used for the actual data storage, they are a convenient means of transferring data from one computer to another. Important files can be quickly loaded onto a device such
geeks.com | techtipsblog.com
available for greater flexibility and ease of installation. A combination drive, such as the NU Technology DBW-521, will provide the user a high speed CD reader/writer, as well as a DVD reader, for under $40. The extremely low price of the drive (and the blank media) makes for an inexpensive means of creating data backups, and the re-writable media increases the convenience by allowing the same disc to be erased and reused many times. The main limitation of using a CD writer for data backups is that the discs are generally limited to a capacity of 700MB per disc: not nearly enough for a full backup, but adequate for archiving key files. The popularity of DVD writers/re-writers has surged thanks to dropping prices, and they are pushing the stand-alone CD burner toward extinction. DVD media affords the user far more storage capacity than a CD, and DVD burners can generally burn CDs as well as DVDs. The recent availability of double-layer DVD burners, such as the Sony DW-D22ADO-N, represents a large boost in the capacity of writable DVDs, taking the previous limit of 4.7GB per disc and nearly doubling it to 8.5GB. With proper storage, CD/DVD media can provide long term storage that cannot be jeopardized by hardware failure. The data on a CD or DVD can easily be read by just about any computer, making it a good choice for archiving files that aren't excessively large.

3. External Hard Drives

As the name might imply, external hard drives are generally the same type of drive you might find inside your system, but housed in a smaller, external enclosure of its own. The enclosure will feature at least one data interface (such as FireWire, USB, or Ethernet), and the capacity is only limited by the size of hard drives presently available and the user's
budget. The Ximeta NetDisk is an example of an external hard drive that gives a user the option of connecting an additional 80GB, 120GB, or 160GB of storage to their system using either a USB 2.0 or Ethernet connection. Installation for such a device is rather simple, and may involve the installation of some basic software, as well as making the necessary connections between the computer and the external enclosure. The capacity of external hard drives makes them ideal for backing up large volumes of data, and many of these devices simplify the process by including software (or hardware) features to automate the backup. For example, some Seagate external drives feature a one-button backup option right on the case. In addition to being a convenient method of backing up large volumes of files locally, most external hard drives are compact enough to be portable. The inclusion of a common data transfer interface, such as USB, allows an external hard drive to be connected to just about any modern computer for data transfer, or for more than one computer to share the external hard drive as a backup.

4. Additional Hard Drives

By simply adding an additional hard drive to your system, you can protect yourself from data loss by copying data from your primary drive to your secondary drive. The installation of a second hard drive isn't difficult, but does require a basic understanding of the inner workings of a computer, which may scare off some users. We do offer a how-to section on our site for many tasks, such as installing a
hard drive into a computer system. To take the installation of a second hard drive to another level of security and reliability, the hard drives may be installed in a RAID array. RAID stands for Redundant Array of Independent (or Inexpensive) Disks, and can be configured in several manners. A thorough discussion of RAID and all of its variations would be an article all by itself, but what may be of interest to this discussion is what is known as RAID 1. A RAID 1 array requires two hard drives of equal size to be installed on a RAID controller, which will then mirror one drive to the other in real time. Many motherboards now come with RAID controllers onboard, but the addition of a PCI slot controller card, such as the Silicon Image Sil0680, is an inexpensive purchase that will add RAID to any system. With a RAID 1 array in place, if one hard drive should ever fail, the system won't miss a beat: it will continue to run on the remaining good drive, and alert the user that one drive may need to be replaced.

5. Online Storage

Online services, such as Xdrive, allow users to upload their files to a server for safekeeping. Although it may be convenient to have the data available wherever an internet connection is available, there are a few limitations.
The services generally charge a monthly fee relative to the amount of storage space required. At Xdrive, for example, 5GB of storage costs $9.95 per month, which can quickly add up to more than one would spend on any of the other options discussed. Security is supposed to be very tight on these services, but no matter how secure it may seem, it is still just a password keeping prying eyes from your potentially sensitive documents. The speed of your internet connection will also weigh heavily on the convenience of your backup, and no matter what type of connection you have, it can't compete with local data transfer rates.

Final Words

Although not a comprehensive list of the options available for backing up your data, the five items listed provide some simple and relatively affordable means to ensure that your data is not lost. Data loss is an extremely frustrating and potentially costly situation, but one that can be avoided.
Geeks.com 1890 Ord Way, Oceanside, CA 92056 1.760.726.7700 Read more about Computer Geeks at our website: www.geeks.com Buy your desktop computers, notebook computers, refurbished computers, computer parts, and computer cases at the Computer Geeks.
3. Browser Toolbars

A growing trend is for websites to offer a downloadable toolbar for use with Internet Explorer. Many of these toolbars offer unique features intended to enhance the user's web browsing experience in different ways, but they generally also include a pop-up blocker. Although there are toolbars available from dozens of websites, Google, MSN, and Yahoo are some of the more reputable names with one available. The installation of these toolbars is quick and easy, and the most difficult part may be reading the fine print in the license agreements. Although these toolbars may do an excellent job blocking pop-ups, they may also be retrieving data on your web surfing / search habits. If you feel a toolbar may be the right solution for you, stick with one from a trusted name, and just be sure to read the fine print.

4. Pop-Up Blocker Software

Stand-alone pop-up blocking software is available from dozens, if not hundreds, of different sources. With various interfaces, and prices ranging from free to $30 (and higher), choosing one can be a difficult task. Many of the programs that are not available for free do come with a free trial download, so you can at least get a sense of whether the program is right for you before committing. Some of the options in this category include STOPzilla, Secure IE, Zero Popup and Pop Swatter, to name a few. The main drawback to this type of pop-up blocking solution is that you now have another independent application running on your computer. Although they are generally not resource intensive, why run a program to do something that can be handled by one that is already running anyway? Additionally, with so many reliable solutions available to eliminate pop-ups for free, spending money on one is hard to justify. Along with a dedicated pop-up blocker, another recommended tactic for eliminating pop-ups is eliminating spyware on your computer system. Some pop-up programs use accompanying spyware to target pop-ups specifically to you and your web surfing habits. An excellent, free program for eliminating spyware of all types is Spybot Search & Destroy.

5. Internet Access Software from Select ISPs

Some ISPs (Internet Service Providers) now incorporate a pop-up blocker with the software they provide to subscribers for accessing the internet. Earthlink, Optimum Online, and AOL are just a few of the larger providers that add value to their packages by adding a pop-up blocker. Bundling this functionality with the ISP's base software definitely makes things easy for the subscriber, as there may be no need to find one elsewhere. In general, these blockers are effective, but they are not the most feature-rich and may have limited options for customization by the end user. One drawback with ISP-provided pop-up blockers is that some only work with their service. So, if you ever switch to a new provider, you'll need to be prepared to switch to a new pop-up blocker as well.

Final Words

Pop-ups are a fact of life on the internet, but that does not mean you need to put up with them. Among the five general solutions presented above, there are literally hundreds of options available for eliminating the clutter of pop-up ads, allowing you to enjoy only the content you intended to see.
Service Pack 2 provides Windows XP with a Windows Security Center, and other key tools, to help protect the user's system from unsafe attachments and downloads. This type of protection is one step toward preventing viruses and Trojans from slipping onto a user's system and wreaking the type of havoc that has become an increasing problem in recent years. One way it does this is through warnings in Internet Explorer's Information Bar, which alert a user to potentially unsafe downloads. The suspect content is blocked automatically,
These pop-ups are approved or denied by the user before anything is allowed to happen, and the choice can be made so that a pop-up will appear again the next time the event occurs, or so that the pop-up will never appear again for that particular event. Many users with broadband internet connections have a hardware firewall in their router, but a software firewall such as this is still a good idea. It can protect where the hardware firewall cannot, and is particularly useful in preventing the system from launching any attacks from Trojans that may have slipped in.

3. Internet Pop-Up Blocker

With Service Pack 2, Internet Explorer now features an integrated pop-up blocker to help reduce, if not fully eliminate, the presence of those nuisance ads. Configurable from Internet Explorer's Tools tab, users can customize their preferences and even turn the pop-up blocker off. Considering most pop-up blockers require a special toolbar or other application to be installed, this one is extremely convenient and easy to use.

4. Increased Privacy Protection

Your privacy is protected more than ever with Service Pack 2, in a few different ways. If items 1, 2, and 3 above weren't enough, there is more. For example, Windows XP with SP2 now applies security settings to further guard your PC and your private information from exploit via Internet Explorer. Another way your privacy is protected is by Outlook Express blocking images within emails that allow spammers to validate your address. Spammers use images that are tagged with unique bits of code, and once the URL of the image sent to you is viewed, the spammers know that they have a valid address, which makes that address more susceptible to future spam.

5. Simplify Wireless Networking

The popularity of wireless networking has exploded as the hardware has become increasingly simple to operate and relatively inexpensive. Now the way a user connects their system to a wireless network has been greatly simplified via enhancements found in SP2. The Wireless Network Setup Wizard will lead a user of any expertise through the installation process, and the Microsoft
Broadband Network Utility will help them monitor and maintain the network just as easily. Application of security settings is obviously a main component of these improvements, ensuring that the user's system is protected from this angle of attack as well.

Final Words

The release of Service Pack 2 for Windows XP brought about many more changes to the operating system than the five listed above, all of which have to be seen as welcome improvements. With a focus on protecting the end user's computer system and data, there are enough good reasons to persuade a user to take the plunge and let Windows Update install SP2.
designed to fit the standard ATX and micro ATX (mATX) form factor cases. A typical ATX power supply, such as this Echo-Star 680W unit, measures 3.25" x 6" x 5.5" and features two cooling fans to not only cool the power supply, but also help draw hot air out of the computer. A typical mATX power supply, such as this A-Power 320W unit, measures 2.5" x 5" x 4" and, due to the smaller size, features just one cooling fan. mATX cases are generally much smaller than ATX cases, and therefore have smaller power supplies, with generally lower power ratings and fewer connectors. The connectors on a power supply also deserve consideration. Most power supplies come with what looks to be an electric octopus of wires hanging off the back surface, and you need to make sure that somewhere in that tangled bundle are all of the connectors you need. The power supply should have at least as many connections as the number of drives, cooling fans, and other items found in the case. Up until recently, power supplies had a fixed number of connections, and if you needed more, you needed to use splitters to distribute the power to all the components. Modular power supplies, such as the Ultra X-Connect 500W unit, are now available that eliminate that electric octopus altogether, and allow the end user to connect just the cables they need. The flexibility of a modular power supply design not only lets you customize the connections to your needs, it also makes for a simple and tidy installation, since there are no extra wires dangling inside the case. The selection of a high quality power supply may cost more money up front, but down the road it could wind up saving money. Many manufacturers now offer power supplies that consume less energy thanks to high quality internal components, advanced designs, and active power factor correction. These units are able to provide the same power to the components in a computer but, due to increased efficiency, draw less power from the electrical outlet.
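The efficiency point can be made concrete with a little arithmetic. A sketch, using illustrative efficiency figures of my own (the article quotes none):

```python
def wall_draw_watts(component_load_w: float, efficiency: float) -> float:
    """Watts pulled from the outlet to deliver a given load to the
    components, for a supply of the given efficiency (0-1)."""
    return component_load_w / efficiency

# For the same 300 W component load:
low_eff  = wall_draw_watts(300, 0.70)  # roughly 429 W from the wall
high_eff = wall_draw_watts(300, 0.85)  # roughly 353 W from the wall
```

The difference between the two wall draws is pure waste heat inside the supply, which is why the more efficient unit can save money over its lifetime.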
Surge Protectors
Surge protectors are intended to protect your electronics from a brief increase in voltage caused by such things as lightning, rolling blackouts, and heavy-drawing electrical equipment. A surge protector reacts by diverting the extra electricity to ground, and thus protects your expensive computer equipment from damage. A surge is any increase lasting three nanoseconds or longer, so a surge protector needs to react quickly. Most surge protectors also include a fuse (or breaker), and if the surge is too great to be handled without interruption, the fuse will blow. Although the fuse may be destroyed, it's a small loss compared to what it may have saved. Surge protectors come in all shapes and styles. Some basic models can even be found at your local dollar store, but offer no more than a few outlets connected to a breaker. No serious protection is obtained, but many people just want more outlets, not protection. More serious surge protectors will probably cost a bit more than a dollar, but will offer some peace of mind that your equipment is actually being protected. In addition to protecting from electrical surges, some devices include extra features such as conditioning to filter out line noise and ports to protect other lines such as cable television, telephone, and networking. The Fellowes Smart Surge Power Strip protects up to 10 devices from surges, as well as offering line conditioning and ports to protect your phone line. A highly appealing feature of such a surge protector is that 4 of the ports are designed to accept bulky AC adaptors. For those with surge protectors that weren't
designed to be this user friendly, there is still hope in the form of Power Strip Saver Cables. Basically just 7-inch-long extension cords, these items can come in very handy when trying to connect multiple AC adaptors to a more traditional surge protector.

Uninterruptible Power Supplies
Many people familiar with uninterruptible power supplies (UPSes) know that they can keep a computer up and running during a total power failure, but don't know what else they do. Most UPSes will also provide protection from voltage surges and sags (when the voltage drops below normal), as well as protection from the possibility of a shift in the electricity's frequency. UPSes come in two varieties, standby and continuous, although standby versions are far more common and less expensive. A standby UPS allows the devices connected to it to run from the normal electrical connection until a loss of power is detected, at which point it quickly (in a matter of milliseconds) switches to the battery backup of the UPS. A continuous UPS always has the devices connected to it running off of battery power, while the batteries are recharged by the normal electrical connection. UPSes are sold in terms of their capacity in VA (voltage multiplied by amperage), which implies that devices connected to the UPS can draw a maximum of so much amperage at a specific voltage. A run time after a power failure is also generally specified for a UPS, based on a full load being placed on the device. Selecting a UPS needs to be based on the intended use. A smaller unit, such as the Fenton 600VA, would be adequate for powering a typical desktop computer, monitor, and smaller peripherals such as a printer and lighting, for what they rate as 15-23 minutes at full load. If multiple systems need to be powered, or perhaps there are plans for future expansion that will add to the power demands, a larger unit such as the Tripp Lite 1500VA may be more appropriate. The first two units are intended to be set in close proximity to the devices to be powered, perhaps on the floor behind a desk, but if you are seeking to add a UPS to a server, there are also rackmount solutions such as the Opti-UPS 1100VA. No matter the application, sizing a UPS may seem overwhelming. One manufacturer, APC, has created a handy UPS Selector Application which will take some of the guesswork out of choosing the right UPS for any particular application.

Final Words

A computer system is only as strong as its weakest feature, and many times that distinction falls on the power supply and related components. By choosing a quality power supply, surge protector, and perhaps a UPS, one can make sure that they have adequately strong and stable power to keep their system running now, as well as down the road when upgrades may increase the demand on their system.
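The VA rating discussed above (volts multiplied by amps) lends itself to a rough sizing calculation. A sketch, assuming an illustrative power factor of 0.7 and a 20% expansion margin, neither of which comes from the article:

```python
def required_va(device_watts, power_factor=0.7, headroom=1.2):
    """Rough UPS capacity: convert the summed wattage of the attached
    devices to VA via an assumed power factor, then add headroom for
    future expansion. (Both defaults are illustrative assumptions.)"""
    return sum(device_watts) / power_factor * headroom

# PC (200 W) + monitor (160 W) + printer (40 W) = 400 W of load:
# 400 / 0.7 * 1.2 is roughly 686 VA, so a 750 VA or larger unit fits.
```

A vendor tool such as the APC selector mentioned above will account for battery run time as well, which this rough formula ignores.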
As the technology has improved and the prices have come down, LCD (Liquid Crystal Display) monitors have rapidly been replacing CRT (Cathode Ray Tube) monitors on desktops around the world. ComputerWorld first reported that LCD sales would surpass CRT sales for the first time in 2003, a lead that it didn't hold for good. But according to DisplaySearch, a flat panel display market research and consulting company, the sales of LCD monitors regained the lead over CRT sales in the third quarter of 2004, a lead that it should eventually hold for good. The question is: why choose LCD over CRT? There are several pros and cons to consider, and the items listed below will be considered in this Geek Tip:

Price
Size
Image Quality
Energy Consumption
Personal Comfort
Response Time
bulky style of housing. For example, the manufacturer's web page lists this ACER 19" LCD monitor as having a depth of a mere 6.9" (including the base) and a weight of 12.1 pounds. As a point of reference, a 19" ACER CRT is significantly larger, with a depth of 16.86" and a hefty weight of 46.31 pounds. Desktop real estate is precious, and an LCD will require only a small fraction of the depth that a CRT would require. And if there isn't even enough room on your desk for a slim LCD monitor, the low weight makes them perfectly adaptable to be hung on the wall, or off of a radial arm mount, such as this one from Office Innovations.

Image Quality

Image quality is generally considered to be better on an LCD, as each pixel is generated by a specific set of transistors in the screen, which produces a crisp image. But some features that fall under the general heading of image quality might not favor an LCD, including viewing angle, brightness, and contrast. Early LCD monitors had a fairly narrow viewing angle that made clearly seeing the screen from anywhere but directly in front of it difficult. This has improved greatly, but still doesn't quite rival the viewing angle of CRTs, which provide the same picture quality regardless of the angle. A monitor with a maximum vertical viewing angle of 120 degrees should not be hard to find at this point, with many monitors now able to provide an even greater angle.
Brightness is an area where LCD monitors may have the edge over CRTs, but it varies widely from unit to unit. The standard measure for brightness is referred to as nits, which have units of cd/m2 (candelas per square meter), where a higher number is better. Looking at the three 17" LCD monitors currently available from geeks.com as examples shows two with brightness specifications of 400 cd/m2 and one with a brightness specification of 250 cd/m2. As a comparison, the typical CRT monitor may provide half the brightness of an LCD, as confirmed at ViewSonic's Monitor University. Contrast is similar to brightness in that it varies widely from unit to unit, and it is a specification where a higher number is desired. The contrast is represented as a ratio, where higher numbers imply that bright colors can be displayed next to dark colors without appearing washed out. Monitors with lower numbers in the ratio may also render dark shades as simply black, and any detail in those areas may be lost. As a point of reference, CRT monitors may have contrast ratios around 700:1, and using the three 17" LCD monitors currently available from geeks.com as examples shows two with contrast ratios of 450:1 and one with a contrast ratio of 400:1. These are quite respectable values for LCD monitors, but CRTs may still have the edge in this department.

Energy Consumption

LCD monitors definitely hold the edge over CRT monitors when it comes to being energy efficient. The huge tube in a CRT monitor is the source of most of its energy consumption, and a comparably sized LCD may use just a fraction of the electricity. Taking a look at this 19" Jetway LCD monitor shows that it consumes 48 Watts during normal operation, which is less than your typical light bulb. In contrast, a 19" CRT such as this one from Viewsonic may draw up to 160 Watts. Therefore the fraction of electricity used in this case is 3/10, which could translate to noticeable savings on your electric bill.

Personal Health and Comfort

The main benefit that LCDs have when it comes to comfort is the reduced strain on your eyes. The reduced glare on the screen's surface, and the elimination of a typical CRT's refresh, can prevent your eyes from getting tired with extended use. A CRT monitor redraws the image on the entire screen as it refreshes, whereas an LCD monitor only changes the necessary pixels during a refresh. There may also be the unquantifiable effect of reduced electromagnetic emissions from LCD monitors. The exact impact of electromagnetic emissions may not be fully understood, but in general less is considered better, as addressed in this article. And your back may also appreciate an LCD when it comes time to move, as the example above shows a 19" LCD monitor weighing roughly a quarter as much as its CRT counterpart.

Response Time

The transistors that create the image on a TFT LCD can be a bottleneck to its performance, especially in fast-paced 3D games where speed is critical. Related to the different approach taken with screen refreshes, the amount of time it takes the pixels to change in order to display a new image is referred to as the response time. If the response time is too slow, one may experience blurred images or
ghost effects, where the previous image is still slightly visible behind the new one. LCD monitor response times have greatly improved over the past few years, and many LCDs are now fast enough to consider for serious 3D gaming use, but specifications still vary from unit to unit. A few years ago a typical response time on an LCD monitor may have been anywhere from 30 to 50 milliseconds; today these numbers get down into the single digits, with anything 25 milliseconds or less being quite common (lower is definitely better). Using the three 17" LCD monitors currently available from geeks.com as examples shows two with response times of 25ms and one with a response time of 16ms.

Final Words

In addition to some of the positives mentioned, many LCD monitors now incorporate other features to make them more practical and even fun. LCD monitors can now be found with integrated USB hubs, stereo speakers, and TV tuners (such as this 15" Sharp unit), and for the right price HDTV is even an option. LCD monitors will continue to replace CRTs as they become less expensive and their many benefits are realized by consumers, but CRTs won't disappear altogether, as many situations require performance that LCDs currently can't provide.
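The 48 W versus 160 W comparison in the Energy Consumption section translates directly into electric-bill savings. A sketch, with an assumed daily usage and electricity rate of my own (not figures from the article):

```python
def annual_cost_usd(watts, hours_per_day=8.0, rate_per_kwh=0.12):
    """Yearly electricity cost of a device that runs hours_per_day,
    every day of the year, at the given $/kWh rate."""
    kwh_per_year = watts / 1000.0 * hours_per_day * 365
    return kwh_per_year * rate_per_kwh

# CRT at 160 W vs. LCD at 48 W, 8 hours a day at $0.12/kWh:
savings = annual_cost_usd(160) - annual_cost_usd(48)
```

Under these assumptions the difference comes to roughly $39 a year, so the monitor's energy efficiency is not just an environmental nicety.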
Computer Geeks is more than just a great source for computer gear and consumer electronics; we're also a community of tech enthusiasts excited about teaching and helping others learn. We've developed Tech Tips because we believe that by providing our guests with the tutorials, instructions, directions, and other learning tools they need to become educated consumers, they'll keep coming back.
In the past, computer cases were all very similar: clones of the same boring, beige box. With all of the choices available today, this is no longer the case, and people can use their system's chassis as a means to express themselves and to set their system apart from the rest. Although appearance may be a big factor, it isn't the only one in the selection process, and the following items should be considered when shopping for a new computer case.

1. Form Factor

There are different sizes of motherboards, which in turn require different cases to house them. Case form factors share the names of the motherboards they support, and some of the common ones include ATX, Micro ATX (mATX), FlexATX and Mini ITX. ATX motherboards are perhaps the most common, and the largest of the four, measuring at most 12" x 9.6" (305mm x 244mm). A Micro ATX board is at most 9.6" x 9.6" (244mm x 244mm), a FlexATX is 9.0" x 7.5" (229mm x 191mm) and
provide more room for multiple drives and other peripherals, and a smaller motherboard may be better suited to a larger case in a system such as this.

2. Size

Size may go along with form factor in many respects, but even among cases of the same form factor, there can be variations in size in a few respects. Areas where size can vary are the overall dimensions, the number of exposed 5.25" and 3.5" bays, and the number of internal bays. ATX cases obviously need to be large enough to hold an ATX motherboard; some are just large enough, while others seem cavernous in comparison. If a case needs to fit under a low shelf, or between items of a certain width, it is important to choose an appropriately sized case. Cases come in two basic configurations when it comes to their size and shape: desktop or tower. Desktop cases are wider than they are tall and are oriented so the motherboard lays flat, while tower cases have the motherboard standing upright, and come in three basic heights: mini tower, mid tower, and full tower. Tower cases are more common these days, and currently the only style in the Computer Geeks case inventory. The number of exposed drive bays is generally in direct proportion to the overall size of the case. A higher number of exposed 5.25" bays may be desirable for those with more than one DVD or CD drive, removable drive racks, and fan controllers. Exposed 3.5" bays are generally occupied by floppy drives, Zip drives, fan controllers, and things like this 9-in-1 Card Reader, and in most cases you may get one or two of these bays, maximum. This case is very
similar in appearance to this other one, but they have one difference that may prove to be a huge factor. They both have four exposed 5.25" bays, but one has two exposed 3.5" bays while the other only has one. If a user had a floppy drive and the 9-in-1 card reader, they would either have to choose to install only one, or use an adaptor and take up one of their 5.25" bays. Internal bays are generally reserved for hard drives, and systems with multiple drives require the necessary space. So, if a user decided he really wanted a yellow colored case, but needed room for five hard drives, he would be forced to choose this one (5 internal drive bays) over this one (4 internal drive bays).

3. Cooling

Cooling is a critical feature to consider when selecting a computer case. High end systems can generate a good deal of heat, and the case needs to be adequately cooled to keep the system running and stable.
The basic configuration for case cooling involves having one intake fan on the lower portion of the front surface, and one exhaust fan higher up on the rear surface. This allows cooler air to be drawn in, passed over the various heat generating components, and exhausted out the back. There are many other cooling configurations available that may provide improvements in terms of cooling performance and noise. One way to decrease noise, and perhaps move more air, is for a case to use 120mm (4") fans instead of the usual 80mm (3") fans, as larger fans don't need to spin as fast to push the
same volume of air. This A-Top Z-Alien utilizes a 120mm exhaust fan that also features another key to good cooling: the fan grill is very open, meaning that there will be minimal resistance to air flow and reduced noise as the air rushes past it. Many fan grills are made by perforating the case's sheet metal, and they do not provide enough open area for good airflow. Another approach to better cooling is to throw more fans at the heat. This Matrix case adds another fan to the side panel, which will draw cool air in right on top of the processor and video card, two of the hotter items in a system. Other cases will add an exhaust fan to the top of the case, which pushes the heat out just like a chimney. No matter the approach, cooling is one area that needs close consideration when it comes to cases intended for today's high powered systems.

4. Installation Features
Installing a system into a case can be a time-consuming affair, which can become annoying to those who find themselves in a continuous cycle of upgrading. Many cases now include convenient features to make installation much simpler and far less time consuming. Some of these installation features include a removable motherboard tray, removable drive cages, tool-less expansion card mounts, tool-less side panels, and tool-less drive rail systems. Being able to remove the motherboard tray and drive cage makes it easier to work on those specific areas in the open, and having a tool-less system for mounting drives or cards means there is no need for screws or a screwdriver. Definitely time savers! Although the listing on the Computer Geeks site does not specify it, this X-Blade ATX case features both a removable drive cage and a tool-less drive rail system, according to this review. 5. Convenience Items It is no longer enough for a case to simply house a computer system; it now needs to multi-task. Having regularly used connections on the front or top of the case is one common convenience feature that many people look for. Cases such as this A-Top Z-Alien model let users forget about the annoyance of reaching around the back of their case to plug things in, as USB, Firewire, headphone, and microphone jacks are located on the top. Other cases take convenience to another level by including clocks, digital thermometers that monitor specific components, and fan controllers to help maintain a healthy balance between noise and cooling performance. 6. Style
A few years ago cases came in only one color and one basic style: plain beige boxes. If you're nostalgic for the olden days of computer cases, Computer Geeks still has one for sale in this style, the KG-200. But cases now come in styles from mild to wild, and in a whole rainbow of colors. Some have large windows in the side panel to show off the case's insides, some include special lighting effects, and some have appearances that might scare the kids. At this point there seem to be few limits in case
design, but there are always classically styled cases in updated color schemes for those who want something modern looking, but nothing too intense. 7. Power Supply Many cases are sold with a power supply included, but this power supply might not be the correct one for the system to be installed in it. An adequate power supply needs to be chosen to meet the demands of the system, and this may very well mean buying an additional power supply to replace the one included, or selecting a different case with a more appropriately sized power supply. For example, someone may decide their high-end gaming system would go well in this black ATX case, but the included 300W power supply may not be strong enough for a top-notch graphics card, multiple drives, water cooling,
and other power-hungry peripherals that might be installed. Final Words There are many factors that go into selecting an adequate case for a computer system, including the seven mentioned above. What may wind up being the most important factor was not discussed, but can hopefully be addressed by balancing the importance of these factors: price. Computer cases can cost anywhere from several dollars to several hundred dollars, meaning that a tight budget may decide which of the other features is really all that important.
Geeks.com 1890 Ord Way, Oceanside, CA 92056 1.760.726.7700 Read more about Computer Geeks at our website: www.geeks.com Buy your desktop computers, notebook computers, refurbished computers, computer parts, and computer cases at the Computer Geeks. Get FREE Geeks.com Tech Tips Newsletter, discounts, and more! Every week, we mail out the
latest Tip to our loyal Geekmail subscribers. If you would like to get the weekly Tech Tips Newsletter delivered to you by email for FREE, just join the geeks.com email list.
The number of different formats available in DVD drives can be confusing to anyone in the market for one. The full list is much longer, but to address a few of the common formats, we have DVD-ROM, DVD-R, DVD-RW, DVD+R, DVD+RW, DVD-RAM, DVD+R DL, and DVD±RW. Wow! Even this list of common formats is long enough; no wonder it's confusing! What's with all the Formats?! The reason for the various recordable DVD formats is that no one group owns the technology, and different groups have chosen to support one technology over another. There is no single industry standard for manufacturers to reference, so for the time being consumers will have a few choices. The first thing to address is DVD itself, which stands for Digital Versatile Disc. Some may argue that the V stands for Video, but with the capability to store video, audio, and data, versatile seems the more fitting word.
The DVD-R/-RW format was developed by Pioneer, and was the first format compatible with stand-alone DVD players. The group that promotes the technology calls itself the DVD Forum, which is an international association
of hardware manufacturers, software firms, content providers, and other users, with notable members such as Hitachi, Samsung, and Toshiba. The DVD-R/-RW format is based on CD-RW technology and uses a similar approach to burning discs.
The DVD+R/+RW format is a newer format, also based on CD-RW technology, and compatible with a large percentage of stand-alone DVD players. The +R/+RW technology is not supported by the DVD Forum, and its main backing comes from a group called the DVD+RW Alliance. The Alliance is a voluntary group of industry-leading personal computing, optical storage, and electronics manufacturers, with members such as Dell, Hewlett Packard, Sony, and Philips Electronics.
The DVD-RAM format is based on PD (Phase-change Dual) drives, and actually uses a cartridge to hold the media (just like its PD predecessor). Some DVD-RAM cartridges are double sided, making them ideal for companies to use for system backup; hence DVD-RAM is usually found only in commercial applications, and most end users won't ever need to use or see this type of drive. The DVD-RAM standard is also supported by the DVD Forum, just like the DVD-R/-RW format. However, because of its use of a cartridge (limiting its compatibility), and the scarcity and price of the media used, DVD-RAM is a distant third when compared to the DVD+R/+RW and DVD-R/-RW technologies.
The +R/+RW and -R/-RW formats are similar, and the main difference DVD+R technology has is the ability to record to multiple layers (with its new DVD+R DL format), where DVD-R can only record to one layer (not all +R drives are capable of dual layer burning, but no -R drives are). The Plextor PX-504U is an example of an external DVD+R/+RW drive capable of recording single layer discs in the +R/+RW format, but also able to read discs recorded by a DVD-R drive. What is DVD±RW? DVD±RW is not actually a separate format, but the designation given to drives capable of both -R/-RW and +R/+RW operation. This type of drive is typically called a Dual Drive (not to be confused with a Double Layer drive) since it can write to both the +R/+RW and -R/-RW formats. The Samsung TS-H552 is a DVD±RW drive capable of reading and writing every format discussed so far, and then some. It takes advantage of the DVD+R DL (Double Layer) technology available with the +R format, allowing the appropriate media to store virtually double the 4.37 GB capacity of a typical single layer disc. The other main thing to consider with DVD burners is selecting the correct media. DVD-R, DVD-RW, DVD+R, and DVD+RW media may all look the same, but they are slightly different in order to match the specific recording formats. The price of media is generally the same across the + and - formats, with RW media costing a good deal more than R media of either format. Double Layer media is even more expensive, and is the only way for an owner of a DVD+R DL drive to take advantage of the tremendous capacity increase. As the number of Double Layer drives on the market increases, the price of DVD+R DL media is expected to fall with increased production. DVD burners (as these drives are often called) can be picky about the media supported, so be sure to choose your media carefully.
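The 4.37 GB single-layer capacity mentioned above differs from the advertised 4.7 GB purely because of units: disc packaging counts decimal gigabytes (10^9 bytes), while operating systems report binary units (2^30 bytes). A quick sketch of the conversion (the arithmetic is standard; the capacities are the ones quoted in this article):

```python
# Disc capacities are advertised in decimal gigabytes (10^9 bytes),
# but operating systems count in binary units (2^30 bytes), so the
# same disc "shrinks" when its size is displayed on a computer.
def advertised_to_binary_gb(advertised_gb: float) -> float:
    """Convert a marketing GB figure to the size an OS reports."""
    return advertised_gb * 10**9 / 2**30

print(round(advertised_to_binary_gb(4.7), 2))  # single layer: ~4.38
print(round(advertised_to_binary_gb(8.5), 2))  # double layer: ~7.92
```

The article's 4.37 GB figure is the same number truncated rather than rounded.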
DVD-ROM: Reads DVD discs.
DVD-R: Writes to DVD-R media (will also typically write to CD-R and CD-RW media).
DVD-RW: Writes to DVD-RW media (will also typically write to DVD-R, CD-R, and CD-RW media).
DVD+R: Writes to DVD+R media (will also typically write to CD-R and CD-RW media).
DVD+RW: Writes to DVD+RW media (will also typically write to DVD+R, CD-R, and CD-RW media).
DVD+R DL: Writes to DVD+R DL (Double Layer) media (will also typically write to DVD+R, DVD+RW, CD-R, and CD-RW media; many Double Layer drives are ALSO dual drives, that is, able to write to BOTH +R/+RW and -R/-RW media).
DVD-RAM: Writes to DVD-RAM cartridges (not in wide use on the consumer market; mainly a business format; can also read PD discs; will not usually be able to write to any other format, including CD-R or CD-RW).
DVD±RW: Writes to DVD-RW and DVD+RW media (will also typically write to DVD-R, DVD+R, CD-R, and CD-RW media; typically called a Dual Drive since it can burn both DVD formats).
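The write-compatibility rules above can be captured in a small lookup table. The entries below simply restate the list (the "typically" qualifiers from the text still apply, so treat this as a rough guide rather than a guarantee for any specific drive):

```python
# Media each drive type can typically write, per the compatibility list above.
WRITES = {
    "DVD-ROM":  set(),  # read-only drive
    "DVD+R":    {"DVD+R", "CD-R", "CD-RW"},
    "DVD+RW":   {"DVD+RW", "DVD+R", "CD-R", "CD-RW"},
    "DVD+R DL": {"DVD+R DL", "DVD+R", "DVD+RW", "CD-R", "CD-RW"},
    "DVD-R":    {"DVD-R", "CD-R", "CD-RW"},
    "DVD-RW":   {"DVD-RW", "DVD-R", "CD-R", "CD-RW"},
    "DVD±RW":   {"DVD-RW", "DVD+RW", "DVD-R", "DVD+R", "CD-R", "CD-RW"},
}

def can_write(drive: str, media: str) -> bool:
    """True if the drive type typically burns the given media."""
    return media in WRITES.get(drive, set())

print(can_write("DVD+RW", "DVD-R"))  # a +RW-only drive cannot burn -R media
print(can_write("DVD±RW", "DVD-R"))  # a dual drive handles both formats
```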
Final Words This article took a look at the more common DVD drive formats in order to shed some light on all the choices available. The differences between them may be subtle, but the compatibility issues can be quite frustrating. The simple answer for anyone considering a drive is to forget about + and - by themselves, and shoot for universal compatibility with a good DVD±RW drive with DVD+R DL support.
The performance of computer systems has been steadily increasing as faster processors, memory, and video cards are continuously developed. The one key component often neglected when looking at improving the performance of a computer system is the hard drive. Hard drive manufacturers have been constantly evolving the basic hard drive used in modern computer systems for the last 25 years, and the last few years have seen some exciting developments, from faster spindle speeds and larger caches to better reliability and increased data transmission speeds.
ATA vs SATA
The drive type used most in consumer-grade computers is the hardy ATA type drive (commonly called an IDE drive). The ATA standard dates back to 1986 and is based on a 16-bit parallel interface which has undergone many evolutions since its introduction to increase the speed and size of the drives it can support. The latest standard is ATA-7 (first introduced in 2001 by the T13 Technical Committee, the group responsible for the ATA standard), which supports data transfer rates up to 133MB/sec. This is expected to be the last update to the parallel ATA standard. As long ago as 2000 it was clear that parallel ATA was reaching the limits of what it could handle. With data rates hitting the 133MB/sec mark on a parallel cable, you invite all sorts of problems with signal timing, EMI (electromagnetic interference), and other data integrity issues; thus industry leaders got together and came up with a new standard known as Serial ATA (SATA). SATA has only been around a few years, but is destined to become the standard due to several benefits to be addressed in this Tech Tip. The two technologies we will be looking at are: ATA (Advanced Technology Attachment), a 16-bit parallel interface used for controlling computer drives. Introduced in 1986, it has undergone many evolutions in the last 18+ years, with the latest version being called ATA-7. Wherever an item is referred to as an ATA device, it is commonly a Parallel ATA device. ATA devices are also commonly called IDE, EIDE, Ultra-ATA, Ultra-DMA, ATAPI, PATA, etc. (each of these acronyms actually refers to a very specific item, but they are commonly
interchanged). SATA (Serial Advanced Technology Attachment), a 1-bit serial evolution of the Parallel ATA physical storage interface. Basic Features & Connections
SATA drives are easy to distinguish from their ATA cousins by the different data and power connections found on the back of the drives. A side-by-side comparison of the two interfaces can be seen in this PDF from Maxtor, and the following covers many of the differences. Standard ATA drives, such as this 200GB Western Digital model, have a somewhat bulky, two-inch-wide ribbon cable with 40-pin data connections, and receive 5V and 12V power from the familiar 4-pin connection. The basic data cables for these drives have looked the same for years. A change was made with the introduction of the ATA-5 standard to improve signal quality by using an 80-wire cable on the 40-pin connector (these are commonly called 40-pin/80-wire cables). To improve airflow within the computer system, some manufacturers resorted to literally folding over the ribbon cable and taping it into that position. Another recent physical change came with the advent of rounded cables. The performance of the rounded cables is equal to that of the flat ribbon, but many prefer the improved system airflow, ease of wire management, and cooler appearance that come with them. SATA drives, such as this 120GB Western Digital model, have a half-inch-wide, 7-pin "blade and beam" data connection, which results in a much thinner and easier to manage data cable. These cables take the convenience of the ATA rounded cables to the next level by being even narrower, more flexible, and capable of being longer without fear of data loss. SATA cables have a maximum length of 1 meter (39.37 inches), which is much greater than the recommended 18-inch cable for ATA drives. The reduced footprint of SATA data connections frees up space on motherboards, potentially allowing for more convenient layouts and room for more onboard features! A 15-pin power connection delivers the necessary power (3.3V, 5V, and 12V lines) to SATA drives. Fifteen pins for a SATA device sounds like it would require a much larger power cable than a 4-pin ATA device, but in reality the two power connectors are just about the same height. For the time being, many SATA drives also come with a legacy 4-pin power connector for convenience. Many modern motherboards, such as this Chaintech motherboard, come with SATA drive connections onboard (many also include ATA connectors for legacy drive compatibility), and new power supplies, such as this Ultra X-Connect, generally feature a few of the necessary 15-pin power connections, making it easy to use these drives on new systems. Older systems can easily be upgraded to support SATA drives by use of adapters, such as this PCI slot SATA controller and this 4-pin to 15-pin SATA power adapter. Optical drives are also becoming more readily available with SATA connections. Drives such
as the Plextor PX-712SA take advantage of the new interface, although the performance will not be any greater than a comparable optical drive with an ATA connection. Performance In addition to being more convenient to install and drawing less power, SATA drives have performance benefits that really set them apart from ATA drives. The most interesting performance feature of SATA is the maximum bandwidth possible. As we have noted, the evolution of ATA drives has seen the data transfer rate reach its maximum at 133 MB/second, where the current SATA standard provides data transfers of up to 150 MB/second. The overall performance increase of SATA over ATA can currently be expected to be up to 5% (according to Seagate), but improvements in SATA technology will surely improve on that. The future of SATA holds great things for those wanting even more speed, as drives with 300 MB/second transfer rates (SATA II) will be readily available in 2005, and by 2008 speeds of up to 600 MB/second can be expected. Those speeds are incredible, and are hard to imagine at this point. Another performance benefit found on SATA drives is their built-in hot-swap capability. SATA drives can be brought online and offline without shutting down the computer system, providing a serious benefit to those who can't afford downtime, or who want to move drives in and out of operation quickly. The higher number of wires in the power connection is partially explained by this, as six of the fifteen wires are dedicated to supporting the hot-swap feature.
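To put those interface ceilings in perspective, here is a hypothetical best-case calculation of how long moving a large file would take at each rated bandwidth. Real drives cannot sustain these rates (the drive mechanism, not the interface, is usually the bottleneck), and the file size is an arbitrary example:

```python
# Best-case seconds to move a file at each interface's rated bandwidth.
RATES_MB_PER_S = {"ATA-7": 133, "SATA": 150, "SATA II": 300}

def transfer_seconds(file_mb: float, rate_mb_per_s: float) -> float:
    return file_mb / rate_mb_per_s

for name, rate in RATES_MB_PER_S.items():
    # e.g. a 10,000 MB (10 GB) backup image
    print(f"{name}: {transfer_seconds(10_000, rate):.0f} s")
```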
Price
Comparing ATA drives to SATA drives can be tricky given all of the variables, but in general SATA drives will still cost a bit more than a comparable ATA drive. The gap is closing rapidly though, and as SATA drives gain in popularity and availability, a distinct shift in prices can be expected. Considering the benefits of SATA over ATA, the potential difference of a few dollars can easily be justified when considering an upgrade. Final Words The current SATA standard provides significant benefits over ATA in terms of convenience, power consumption and, most importantly, performance. The main thing ATA has going for it right now is history, as it has been the standard for so long that it will not likely disappear any time soon. The future of SATA will be even more interesting, as speed increases will help hard drive development keep pace with other key system components.
5 Simple Steps to a Quieter PC
Tech Tip 9 - Jason Kohrs
No one wants their computer to be loud, but in order to keep components running at safe temperatures, cooling fans can wind up making the system sound like a blow dryer. In a busy office environment some noise may go unnoticed, but as computers find their way into more rooms of the home, near silence is essential. A computer sitting in the living room for use with a home theater system has to be quiet so that it doesn't interfere with the enjoyment of music or movies, for example. Complete systems and high-end components are available to combat computer noise, but this Tip will look at a few areas to quiet existing systems on a minimal budget. 1. Cooling Fans The bulk of all noise in a computer system is going to come from the cooling fans mounted on the case and on heat-generating components such as the processor. Cases generally employ 80mm fans with ball bearings to keep cool air flowing. Two steps to reduce noise include increasing the fan size and choosing a fan with fluid or sleeve bearings. If a 120mm fan can be installed where the 80mm fan presently
resides, a noise reduction can be achieved because the larger fan can move the same amount of air at a lower rotational speed. In general, the slower a fan spins, the less noise it makes. The ball bearings in many fans are a source of vibration, which in turn creates noise. Selecting a fan with fluid or sleeve bearings will greatly reduce the noise created, which is generally a good thing, except in one respect: ball bearing fans can be counted on to get even noisier just before failure, letting you know when replacement is necessary, while fluid or sleeve bearings will simply fail without such a warning, which could jeopardize other system components. One other caveat of sleeve-bearing fans vs. ball-bearing fans is that sleeve-bearing fans generally tend to fail sooner. A quality processor cooler is essential to keep a high-powered system running cool, but it isn't always necessary to run the fan installed at
full speed. Some coolers, such as the Cooler Master Aero 4, include a simple fan speed dial that can be mounted in either the back or the front of the case for convenient adjustment. For those bold enough to run plumbing inside a computer, water cooling kits such as the Cooler Master Aquagate can take cooling performance and quiet operation to a whole new level. Many cooling fans will actually list the decibel level of the noise they generate; the lower the number, the better. In practical terms, below 20 decibels (dB) would be super quiet; 20 to 30 decibels, somewhat quiet; 30 to 40 decibels, somewhat noisy; and over 40 decibels, just plain noisy. 2. Cases The design of a case is a key factor in the system's cooling performance and noise generation. A case with ample ventilation is required to keep the components cool, and a few things can be done to achieve this without adding to the noise level. Of most interest is the availability of multiple fan mounting locations in a case, as well as the open area provided for the fans to move air. Taking a look at the back of this Gladiator ATX Window Case shows that the user has the flexibility to mount an 80mm fan, or opt for the previously described benefits of a 120mm fan. But the perforations provided for the air to pass through are somewhat restrictive, which could add to the noise level as the wind whistles through the small openings. This is nothing that someone handy with a Dremel couldn't remedy, but for those who don't want to cut up their case, compare the Z-Alien ATX Window Case to the Gladiator. There is much more open area for a 120mm fan to
pass the air without restriction. Along the same lines, but applied to other areas of the case, the X-Blade ATX Window Case has a fairly open design on both the front grill and the side panel for 80mm fans to draw in cool air. Experimenting with the size, speed, and placement of case fans can lead to a setup with adequate cooling and surprisingly low noise production. It is possible for some cases to be cooled well with a single 120mm exhaust fan while leaving the other fan locations empty. The noise will obviously be less with fewer fans running, and if the temperatures are acceptable there is no need to use all of the fan mounts just because they are there. 3. Fan Controllers Fan controllers are available in numerous configurations, but they all serve the same function: to allow a fan to run at something other than full speed. Just reducing a fan's speed by 5-10% can have a noticeable impact on noise, but little impact on cooling performance. Some fan controllers operate automatically, using a thermal sensor to vary the speed of the fan in direct proportion to the temperature sensed. This type is convenient as it requires no user interaction, but it eliminates any possibility of custom control. Manual speed controllers put all of the power in the user's hands, generally with a dial that adjusts the fan's speed by varying the resistance on the line powering it. The Cooler Master Cool Drive 4 is primarily a hard drive cooler, but it also serves as a four-channel manual fan speed controller. From one digital control panel, up to four temperatures can be monitored, and the corresponding fans can be monitored and controlled to maintain a healthy balance between noise and temperature.
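A side note on the decibel figures mentioned earlier: noise levels from multiple fans don't add linearly. Two identical sources combine to roughly 3 dB more than one, not double the number. A sketch using the standard sound-power-sum formula (the fan ratings here are made-up illustrations, not measurements of any product named in this Tip):

```python
import math

def combined_db(levels):
    """Combine independent noise sources by summing their sound power."""
    return 10 * math.log10(sum(10 ** (db / 10) for db in levels))

print(round(combined_db([30]), 1))      # one 30 dB fan
print(round(combined_db([30, 30]), 1))  # two of them: ~33 dB, not 60
print(round(combined_db([30, 20]), 1))  # a much quieter fan barely registers
```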
4. Power Supplies The typical computer power supply features two 80mm fans to keep it cool, which will obviously also generate some noise. Fanless power supplies that generate zero fan noise are now available, but none have found their way to the shelves at Geeks.com. These fanless power supplies don't follow the guidelines of typical design, and there are other ways to quiet a power supply without removing the fans altogether. The MGE Vigor 450W Power Supply incorporates two ideas already discussed in other sections in order to reduce noise from the power supply. It features a larger 120mm fan to move more air at lower speed, and a fan speed control knob to allow users to reduce the speed even more if they desire. Some other companies, such as Clever Power (which Computer Geeks sells from time to time), specialize in making super quiet power supplies with a variable fan that automatically speeds up and slows down depending on the system's power draw. 5. Noise / Vibration Isolators Products are available to reduce the vibration caused by system components, as well as to insulate the case to keep the noise from escaping. Some of these isolation products can be applied to many areas of a computer system and may drastically reduce the overall noise, no matter what components are installed. For the bottom of the computer case, rubber feet are available to replace the hard plastic ones generally found there. Silicone gaskets can be installed between a power supply or case fan and the case to reduce the transmission of vibrations and the amplification of noise. If you want to keep the noise inside your case, there is even adhesive-backed sound insulation that can be applied to the inside walls of a computer case. Final Words The number of components and accessories available to quiet a computer is overwhelming, and growing daily as people become fed up with the noise from their vacuum cleaner... I mean, computer! Silencing a computer can be a costly endeavor, but taking a few relatively inexpensive steps can have a drastic impact on the noise produced by the common computer system. Before starting on any sound reduction upgrades, analyzing a system to pinpoint the areas in need of the most attention will help determine the best course of action and the best way to spend any money.
Bluetooth Basics
Tech Tip 10 - Jason Kohrs
Bluetooth technology is nothing new, but in many respects it still seems to be more of a buzzword than a well understood, commonly accepted technology. You see advertisements for Bluetooth-enabled cell phones, PDAs, and laptops, and a search of the Computer Geeks website shows all sorts of devices taking advantage of this wireless standard. But what is it? History Before getting into the technology, the word Bluetooth is intriguing all on its own, and deserves a look. The term is far less high tech than you might imagine, and finds its roots in European history. The King of Denmark from 940 to 981 was renowned for his ability to help people communicate; his name (in English)... Harald Bluetooth. Perhaps a bit obscure, but the reference is appropriate for a wireless communications standard. Another item worth investigating is the Bluetooth logo. Based on characters from the runic alphabet (used in ancient Denmark), it was chosen as it appears to combine the English letter B and an asterisk. Capabilities The FAQ on the Bluetooth.org website offers a basic definition: "Bluetooth wireless technology is a worldwide specification for a small-form factor, low-cost radio solution that provides links between mobile computers, mobile phones, other portable handheld devices, and connectivity to the Internet." Just like 802.11b/g wireless networking systems and many cordless telephones, Bluetooth devices operate on 2.4 GHz radio signals. That band is getting a bit crowded, and interference between devices may be difficult to avoid. Telephones are now being offered on the 5.8 GHz band to help remedy this, and Bluetooth has taken its own steps to reduce interference and improve transmission quality. Version 1.1 of the Bluetooth standard greatly reduces interference issues, but requires completely different hardware from the original 1.0C standard, thus eliminating any chance of backwards compatibility. The typical specifications of Bluetooth indicate a maximum transfer rate of 723 kbps and a range of 20-100 meters (65 to 328 feet), depending on the class of the device. This speed is a fraction of that offered by the 802.11b
or 802.11g wireless standards, so it is obvious that Bluetooth doesn't pose a threat to replace your wireless network. Although it is similar to 802.11 in many ways, Bluetooth was never intended to be a networking standard, but it does have many practical applications. Practical Applications Browsing the Computer Geeks website shows a variety of products that take advantage of Bluetooth's capabilities, from laptops and PDAs, to headphones and input devices, and even wireless printer adapters. Laptops, such as the Toshiba Tecra 9000, include an onboard Bluetooth adapter to allow the system to connect to any Bluetooth device right out of the box. For laptop or desktop systems that do not have an adapter built in, there are USB Bluetooth adapters, such as the Belkin F8T001. Bluetooth-enabled PDAs, such as the HP iPAQ hx4700, allow for convenient wireless synchronization and data transfer. Headphones can take advantage of Bluetooth for two purposes: audio playback and mobile phone communications. Using something like the Logitech Mobile Headset with a Bluetooth-enabled mobile phone allows anyone to go hands free, as well as wire free. Logitech, and other manufacturers, also produce input devices that eliminate wires thanks to Bluetooth. You can add a Bluetooth mouse to your system, such as the Logitech MX900, or both a mouse and keyboard using something like the Logitech diNovo Media Desktop. One advantage that Bluetooth wireless keyboard/mouse combinations have over standard RF wireless combinations is range: where most standard RF keyboard/mouse combinations have a range of up to 6 feet, a Bluetooth keyboard/mouse combination will usually have a range of up to 30 feet. The HP JetDirect BT1300 Bluetooth printer adapter makes sharing a printer extremely convenient by eliminating the need for any wires or special configurations on a typical network. Printing to any compatible HP printer from a PC, PDA, or mobile phone can now be done easily from anywhere in the office. Final Words At this point the popularity of Bluetooth might not be as great as some proponents would have hoped, but many devices are available for those interested. The cost and competition from other standards have hindered widespread acceptance, but Bluetooth does offer a viable solution for many devices that might not have wireless connectivity without it.
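The speed gap described above is easier to appreciate with a concrete (hypothetical) example: moving a 5 MB file over Bluetooth's 723 kbps versus 802.11b's 11 Mbps, assuming best-case throughput on both links:

```python
# Best-case transfer time: file size in bytes, link rate in bits per second.
def transfer_seconds(file_bytes: int, bits_per_second: float) -> float:
    return file_bytes * 8 / bits_per_second

FILE_BYTES = 5 * 10**6  # a hypothetical 5 MB file

print(f"Bluetooth (723 kbps): {transfer_seconds(FILE_BYTES, 723_000):.0f} s")
print(f"802.11b (11 Mbps):    {transfer_seconds(FILE_BYTES, 11_000_000):.1f} s")
```

Roughly a minute versus a few seconds, which is why Bluetooth suits peripherals and syncing rather than networking.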
Basics of RAID
A couple of recent Tech Tips have made mention of RAID, but the level of detail required in those tips didn't shed much light on what RAID actually is. The number of e-mail responses and comments in the Readers Digress section was convincing enough that an introduction to the basics of RAID would be an appropriate Tech Tip, so here it is. Introduction The word RAID sounds like it might describe something Marines conduct in Fallujah, or a can of what all roaches fear, but it is simply an acronym that stands for Redundant Array of Independent (or Inexpensive) Disks. Depending on who you talk to, the letter I can stand for either independent or inexpensive, but in my opinion independent is more appropriate, and far less subjective. RAID generally allows data to be written to multiple hard disk drives so that a failure of any one drive in the array does not result in the loss of any data, increasing the system's fault tolerance. I say RAID generally does this, as there are several RAID configurations that provide different approaches to redundancy, but some RAID configurations are not redundant at all. Fault tolerance refers to a system's ability to continue operating when presented with a hardware (or software) failure, as should be experienced when a hard drive fails in one of the redundant configurations of RAID. The Hardware The basic hardware required to run RAID includes a set of matched hard drives and a RAID controller. RAID can be run on any type of hard drive, including SCSI, SATA, and ATA. The number of hard drives required depends on the particular RAID configuration chosen, as described later. I mention the need for matched hard drives, and although matching is not absolutely necessary, it is recommended. Most arrays will only be able to use the capacity of the smallest drive, so if a 250GB Hitachi drive is added to a RAID configuration with an 80GB Hitachi drive, that extra 170GB would
probably go to waste. The only time this doesn't apply is in a configuration called JBOD (Just a Bunch Of Disks), which really isn't a RAID configuration at all, just a convenient thing that a RAID controller can do; see the Basic RAID Configurations section below for more information. In addition to matching capacities, it is highly recommended that drives match in terms of speed and transfer rate, as the performance of the array will be restricted by the weakest drive used. One more area that should be considered while matching is the type of hard drive. RAID controllers are generally for either SCSI, SATA, or ATA exclusively, although some systems allow RAID arrays to be operated across controllers of different formats. The RAID controller is where the data cables from the hard drives are connected, and it conducts all of the processing of the data, much like the typical drive connections found on a motherboard. RAID controllers are available as add-on cards, such as this Silicon Image PCI ATA RAID controller, or integrated into motherboards, such as the SATA RAID controller found on the Asus K8V SE Deluxe. Motherboards that include RAID controllers can be operated without the use of RAID, but the integration is a nice feature to have if RAID is a consideration. Even for systems without onboard RAID, the relatively low cost of add-on cards makes this part of the upgrade relatively pain free.

Another piece of hardware that is not required, but may prove useful in a RAID array, is a hot-swappable drive bay. It allows a failed hard drive to be removed from a live system by simply unlocking the bay and sliding the drive cage out of the case. A new drive can then be slid in, locked into place, and the system won't skip a beat. This is typically seen on SCSI RAID arrays, but some IDE RAID cards will also allow this, such as this product manufactured by Promise Technology.

The Software
RAID can be run on any modern operating system provided that the appropriate drivers are available from the RAID controller's manufacturer. A computer with the operating system and all of the software already installed on one drive can easily be cloned to another single drive by using software like Norton Ghost. But it is not as easy when going to RAID: a user who wants to upgrade an existing system with a single bootable hard drive to RAID must start from the beginning. This implies that the operating system and all software need to be re-installed from scratch, and all key data must be backed up and then restored on the new RAID array. If a RAID array is desired in a system for use as storage, but not as the location for the operating system, things get much easier. The existing hard drive can remain intact, and the necessary configuration can be made to add the RAID array without starting from scratch.

Basic RAID Configurations
There are about a dozen different types of RAID that I know of, and I will describe five of
the more typical configurations, as usually offered on RAID controller cards.

RAID 0 is one of the configurations that does not provide redundancy, making it arguably not a true RAID array. Using at least two disks, RAID 0 writes data to the drives in an alternating fashion, referred to as striping. If you had 8 chunks of data, for example, chunks 1, 3, 5, and 7 would be written to the first drive, and chunks 2, 4, 6, and 8 would be written to the second drive, but all in sequential order. This process of splitting the data across drives allows for a theoretical performance boost of up to double the speed of a single hard drive, but real-world results will generally not be nearly that good. Since all data is not written to each disk, the failure of any one drive in the array generally results in a complete loss of data. RAID 0 is good for people who need to access large files quickly, or just demand high performance across the board (i.e. gaming systems). The capacity of a RAID 0 array is equal to the sum of the individual drives. So, if two 160GB Seagate drives were in a RAID 0 array, the total capacity would be 320GB.

RAID 1 is one of the most basic arrays that provides redundancy. Using at least two hard drives, all data is written to both drives in a method referred to as mirroring. Each drive's contents are identical to the other's, so if one drive fails, the system can continue operating on the remaining good drive, making it an ideal choice for those who value their data. There is no performance increase as in RAID 0, and in fact there may be a slight decrease compared to a single-drive system as the data is processed and written to both drives. The capacity of a RAID 1 array is equal to half the sum of the individual drives. Using those same two 160GB Seagate drives from above in RAID 1 would result in a total capacity of 160GB.

RAID 0+1, as the name may imply, is a combination of RAID 0 and RAID 1. You have the best of both worlds: the performance boost of RAID 0 and the redundancy of RAID 1. A minimum of four drives is required to implement RAID 0+1, where all data is written in both a mirrored and striped fashion across the four drives. Using the 8 chunks of data from the example above, the write pattern would be something like this: chunks 1, 3, 5, and 7 would be written to drives one and three, and chunks 2, 4, 6, and 8 would be written to drives two and four, again in a sequential manner. If one drive should fail, the system and data are still intact. The capacity of a RAID 0+1 array is equal to half the total capacity of the individual drives. So, using four of the 160GB Seagate drives results in a total capacity of 320GB when configured in RAID 0+1.

RAID 5 may be the most powerful RAID configuration for the typical user, with three (or five) disks required. Data is striped across all drives in the array, and in addition, parity information is striped as well. This parity information is basically a check on the data being written, so even though all data is not being written to all the drives in the array, the parity information can be used to reconstruct a lost drive in case of failure. Perhaps a bit difficult to describe, so let's go back to the example of the 8 chunks of data, now being written to 3 drives in a RAID 5 array. Chunks one and two would be written to drives one and two respectively, with a corresponding parity chunk being written to drive three. Chunks three and four would then be written to drives one and three respectively, with the corresponding parity chunk being written to drive two. Chunks five and six would be written to drives two and three, with the corresponding parity chunk being written to drive one. Chunks seven and eight take us back to the beginning, with the data being written to drives one and two, and the parity chunk being written to drive three. It might not sound like it, but because the parity information is always written to the drive not containing those specific chunks of data, there is full redundancy. The capacity of a RAID 5 array is equal to the sum of the capacities of all the drives used, minus one drive. So, using three of the 160GB Seagate drives, the total capacity is 320GB when configured in RAID 5.

JBOD is another non-redundant configuration, which does not really offer a true RAID array. JBOD stands for Just a Bunch Of Disks (or Drives), and that is basically all that it is. RAID controllers that support JBOD allow users to ignore the RAID functions available and simply attach drives as they would to a standard drive controller. No redundancy, no performance boost; just additional connections for adding more drives to a system. A smart thing that JBOD does is that it can treat odd-sized drives as a single volume (thus a 10GB drive and a 30GB drive would be seen as a single 40GB drive), so it is good to use if you have a bunch of odd-sized drives sitting around, but otherwise it is better to go with a RAID 0, 1, or 0+1 configuration to get the performance boost, redundancy, or both.

Final Words
Implementing RAID may sound daunting to those unfamiliar with the concept, but with some of the more basic configurations it is not much more involved than setting up a computer to use a standard drive controller. And the benefits of RAID over a single-drive system far outweigh the extra consideration required during installation. Losing data once due to hard drive failure may be all that is required to convince anyone that RAID is right for them, but why wait until that happens?
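The capacity rules and the RAID 5 parity idea described above can be sketched in a few lines of Python. This is an illustrative sketch only: the drive sizes are the example values from the text, and real controllers stripe fixed-size blocks across disks rather than XOR-ing byte strings.

```python
def capacity_gb(level, drives):
    """Usable capacity for the configurations described above.
    'drives' is a list of per-drive capacities in GB; matched sizes are
    assumed, except for JBOD, which simply concatenates whatever it gets."""
    n = len(drives)
    smallest = min(drives)
    if level == "RAID 0":       # striping: sum of the (matched) drives
        return n * smallest
    if level == "RAID 1":       # mirroring: half the total
        return smallest
    if level == "RAID 0+1":     # mirrored stripes: half the total
        return (n // 2) * smallest
    if level == "RAID 5":       # sum minus one drive's worth of parity
        return (n - 1) * smallest
    if level == "JBOD":         # concatenation: odd sizes all count
        return sum(drives)
    raise ValueError(level)

print(capacity_gb("RAID 0", [160, 160]))    # 320
print(capacity_gb("RAID 1", [160, 160]))    # 160
print(capacity_gb("RAID 0+1", [160] * 4))   # 320
print(capacity_gb("RAID 5", [160] * 3))     # 320
print(capacity_gb("JBOD", [10, 30]))        # 40

# RAID 5's redundancy comes from XOR parity: the parity chunk is the XOR
# of the data chunks in its stripe, so any one lost chunk can be rebuilt
# from the surviving chunks plus the parity.
a, b = b"chunk one", b"chunk two"
parity = bytes(x ^ y for x, y in zip(a, b))
rebuilt_a = bytes(p ^ y for p, y in zip(parity, b))  # drive holding 'a' failed
assert rebuilt_a == a
```

The capacity figures match the 160GB Seagate examples in the text, and the last few lines show why losing any single drive in a RAID 5 array loses no data.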
Geeks.com 1890 Ord Way, Oceanside, CA 92056 1.760.726.7700 Read more about Computer Geeks at our website: www.geeks.com Buy your desktop computers, notebook computers, refurbished computers, computer parts, and computer cases at the Computer Geeks. Get FREE Geeks.com Tech Tips Newsletter, discounts, and more! Every week, we mail out the
latest Tip to our loyal Geekmail subscribers. If you would like to get the weekly Tech Tips Newsletter delivered to you by email for FREE, just join the geeks.com email list.
MMC and SD
Flash memory is available in so many formats that it can be difficult to know what will work with any particular device. Devices such as MP3 players, PDAs, mobile phones, digital cameras, and personal computers can take advantage of flash memory to bolster their storage capacity, but selecting the right format may be easier said than done. To try to address all of the common formats in one Tech Tip might be quite a read, so we're doing a two-part series on flash memory. Part I of the series will focus on two similar, very popular, and generally interchangeable formats: MMC and SD.

The Basics
Before getting into the details, some background on each card may be appropriate. The letters MMC stand for MultiMedia Card, a format that was developed jointly by SanDisk and Siemens in 1997. The letters SD stand for Secure Digital, a format that improves on the original MMC design and was developed jointly by SanDisk, Matsushita Electronics (better known as Panasonic), and Toshiba. Both formats are quite durable, and the solid-state (no moving parts) components are protected by a rigid plastic shell. The devices are generally unaffected by extreme temperatures, and should withstand a drop of 10 feet without experiencing any damage from shock.

Physical Features
Both MMC and SD flash memory units measure approximately 24mm x 32mm x 2.1mm, about the size of a typical postage stamp, and weigh a mere 2 grams. This miniature footprint may make them about the easiest way to misplace your data, but it also allows the devices that accept them to be smaller. Personal electronics are shrinking as they get more powerful, and the necessary accessories need to keep pace. Although they share the same basic form factor, MMC and SD cards can be distinguished by two physical features: a sliding tab and the number of connections.
When looking at an MMC or SD card so that the label is facing you, and the electrical connections are facing away from you, there will be a notch in the upper right corner of the card. From this point of reference there will be a small sliding tab on the left edge of an SD card, not found on an MMC card. Compare this 512MB MMC card with this 512MB SD card and you can see the difference if you look closely at the enlarged images. This tab slides into two positions, locked and unlocked. It allows the user to
manually write protect the data on the card, which means with the tab in the locked position data can be read from the card, but nothing can be written to or erased from the card. The other physical difference is on the backside of the card. An MMC card features seven electrical connections (small rectangular pads for data transfer and receiving power), whereas an SD card has nine.
While there may be rare exceptions, for all practical purposes, SD & MMC cards may be used interchangeably on current devices, especially if they indicate "SD/MMC" compatibility.
Transfer Rate
SD and MMC cards are capable of similar data transfer rates, with a slight edge going to the SD cards. SD cards are now available with write speeds rated at 60x (9 MB/s) and read speeds rated at 66x (10 MB/s), while MMC transfer rates seem to peak at 9 MB/s in either direction. Not much of a difference, and both are quite fast, but end-user results will vary and may not reach these speeds in real-world use, regardless of the format chosen. SD and MMC cards should reference a speed as part of the technical specification, and it is an important thing to consider when shopping around. Lower-speed cards are still commercially available, and can have an impact on the performance of digital cameras or other devices where speed may be critical. Each 'x' in the speed rating represents 0.15 MB/s, so if 45x compared to 66x doesn't sound like a big deal to you, maybe putting it in terms of 6.75 MB/s compared to 10 MB/s will. Instead of actual speed ratings, some manufacturers will use words like "High Speed" or "Ultra" when referring to the faster cards. Note: check the actual write speed specs of your device before purchasing "Ultra" or "High Speed" cards. You could be putting a Hemi engine in an AMC Gremlin. Don't spend the extra money if the camera does not support it.

Capacity
SD cards are readily available in sizes up to 1 GB, 2 GB models are starting to show up, and the SD Card Association states that models with up to 4 GB and 8 GB of storage capacity are also on the way. In contrast, MMC cards have a maximum capacity of 512MB, making the SD technology much more appealing.

Security
As mentioned in the Physical Features section above, SD cards offer the benefit of write protection. By locking the card, a user can be assured that the data is secure until they take the necessary step to unprotect it. Fears of accidentally losing or changing data can be eliminated by using an SD card over an MMC card, thus improving the security of the data. Another feature supported by SD, but not MMC, involves copyright protection. The SanDisk web site refers to this feature as "cryptographic security for protection of copyrighted data", and other sources reference it as DRM, or Digital Rights Management. Basically, licensed content can be written to an SD card and it can not be executed except from that specific card.

Applications
In general, SD and MMC cards are interchangeable and either can be used in a compatible device. An SD card may generally cost more than an MMC card with the same capacity, but as seen in this Tech Tip, it does offer more for the money. Many card readers are available for personal computers that promote the ability to read and write to a variety of common flash media formats. A 15-in-1 reader/writer, such as this one, can be made quite compact thanks in part to the fact that two of the 15, MMC and SD, can be read from the same slot on the device. MP3 players generally come with a base amount of memory to store music files, but having an expansion slot allows users to increase the capacity, and play time, by adding flash memory of their choice. The Pogo RipFlash MP3 Player is such a device, providing 256MB onboard as well as an SD/MMC slot for easy expandability. Mobile phones and PDAs can also take advantage of increased storage space thanks to flash memory slots. The Handspring Treo 600 is a combination phone/PDA that offers an SD/MMC slot for such convenience. And of course, digital cameras use flash memory as their 'film', where larger and faster cards are always a welcome upgrade. The 6.1 MegaPixel Kodak DX7630 could fill up the same SD/MMC card much faster than the 3.2 MegaPixel Umax AstraPix 640, but one of the great things about these cards is that the user can choose the size, as well as the quantity to have on hand, in order to suit their particular needs and budget.
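The 'x' speed ratings mentioned above are easy to convert to real throughput at 0.15 MB/s per x, which makes comparing cards, or estimating how long a full card takes to read, a quick calculation. A small illustrative sketch (the 512 MB card size is just an example, not a product spec):

```python
def x_to_MBs(x_rating):
    """Convert a flash card 'x' speed rating to MB/s (1x = 0.15 MB/s)."""
    return x_rating * 0.15

for rating in (45, 60, 66):
    mb_s = x_to_MBs(rating)
    # time to move a 512 MB card's worth of data at that sustained rate
    print(f"{rating}x = {mb_s:.2f} MB/s -> 512 MB in about {512 / mb_s:.0f} s")
```

As the text notes, these are rated maximums; real-world transfers will usually come in lower.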
One word of warning: be sure to check your device for the capacity of card that it can handle. If your camera can handle only up to a 512 MB card, then using a 1 GB card in the camera will be pointless. (Depending on the device, some will not even be able to read the card, whereas others will only use up to the capacity that the device is rated for. Either way, you want to make sure that you match the card properly to the device.) So as always, check your product's manual to be sure that you buy memory that it can support.

Final Words
MMC and SD are two of the more commonly used formats of flash memory, but as mentioned, there are several others. Keep an eye out for next week's Tech Tip: Part II of the Flash Memory Series will address CompactFlash, SmartMedia, Memory Stick, and xD formats.
Flash Memory, Part II
In the first part of the series on flash memory, an overview was provided for two fairly common formats: MultiMedia Card and Secure Digital. In this installment, we will wrap things up with an overview of four other common formats: CompactFlash, SmartMedia, xD, and Memory Stick (yes, there are actually even MORE, but these six constitute the ones used most in the flash memory universe). It would be convenient for consumers if manufacturers could all agree on one format of flash memory, but don't hold your breath for that to happen! With flash memory being used in various devices such as MP3 players, PDAs, mobile phones, and digital cameras, one can count on having as many choices in devices as in the memory required for them. This Tech Tip will cover some of the basics of each format mentioned, providing information on the history and technology of each.

CompactFlash
CompactFlash technology was developed by SanDisk in 1994, making it one of the oldest flash memory formats currently in use. According to the CompactFlash Association (http://www.compactflash.org/), CompactFlash cards have the potential for capacities up to 137 GB and data transfer rates of up to 66 MB/s. But current devices can realistically be expected to have capacities of up to 12 GB and data transfer rates of up to 16 MB/s, both of which are still very impressive (and currently very expensive for the large-capacity cards). Every CompactFlash card is 43mm wide and 36mm long, but depending on the type of card, they can have two different thicknesses. Type I CompactFlash cards are 3.3mm thick, Type II CompactFlash cards are 5.5mm thick, and these dimensions make the cards fairly large compared to other flash memory. The connections for these cards are found at one end and feature two rows of 25 sockets that supply either 3.3V or 5V to the card (they can operate on either).
This 1 GB SanDisk model (http://www.geeks.com/details.asp?invtid=SD-CF1GB-N&cat=RAM) is an example of a typical Type II CompactFlash card. The larger size of CompactFlash cards may seem like a disadvantage, but it is necessary for one of the format's main advantages: it is the only format of flash memory where the controller is actually onboard, making it more universally compatible and capable of increased performance by unloading the processing burden from slower devices that it may interface with. The thickness of the cards can also be considered a bonus for two other reasons. There is plenty of space inside for large-capacity, high-density memory modules, and the longevity of the device may be increased since the cards are more rugged than many other form factors. Microdrives are a separate type of compact storage device first developed by IBM, but they share the same interface and general dimensions as a Type II CompactFlash card (Microdrives actually have teeny-tiny spinning discs in them; they are not solid-state flash memory like CompactFlash). Computer Geeks sells a 2.2 GB Microdrive by MagicStor (http://www.geeks.com/details.asp?invtid=GS1022C-N).
SmartMedia
SmartMedia was first developed by Toshiba, and its technical name is actually Solid-State Floppy-Disk Card (or SSFDC for short). Just as CompactFlash has a group backing it, SmartMedia is promoted by the SSFDC Forum (http://www.ssfdc.or.jp/english/). All SmartMedia cards are 37 mm wide by 45 mm long by about 0.76 mm thick, with a notch found in one corner and exposed gold contacts on the back side. At less than 1mm thick, SmartMedia is easily the thinnest of the flash memory formats. The maximum capacity one can expect to find for SmartMedia is a mere 128 MB, making it a less than appealing solution for modern mass storage. SmartMedia's popularity has been on the decline for years as more powerful technologies have emerged to replace it. Computer Geeks stocks 128 MB (http://www.geeks.com/details.asp?invtid=SM128) and 64MB (http://www.geeks.com/details.asp?invtid=SM64) SmartMedia cards, as well as a couple of adapters that let you use a SmartMedia card in a CompactFlash (http://www.geeks.com/details.asp?invtid=SMTOCF) or PCMCIA (notebook) (http://www.geeks.com/details.asp?invtid=SMTOPCMCIA) slot.
The extremely low profile is achieved in part by the lack of an onboard controller, and by the fact that SmartMedia is basically just memory modules embedded in a plastic card. The controlling is conducted by the device using the memory, which is how all flash memory formats but CompactFlash operate anyway. Early SmartMedia cards operated on 5V, but the current standard uses 3.3V. Older 5V cards can not be used in 3.3V SmartMedia devices, so it is important to know the difference between them. Holding a SmartMedia card so the exposed electrical contacts are facing you and positioned at the top of the card, if the notch is on the left it operates on 5V; if the notch is on the right it operates on 3.3V. This notch also prevents one type of card from being fully inserted into a device that is not designed to accept it.

xD
The xD (eXtreme Digital) format was launched by Fujifilm and Olympus in 2002, and is promoted by the group at the official xD-Picture Card website (http://www.xd-picture.com/). With a complete name of xD-Picture Card, this format was intended solely for use with digital cameras, although it did find applications elsewhere. Fujifilm and Olympus were two of the biggest supporters of SmartMedia, and the launch of xD was a pretty good sign that the future of SmartMedia was limited. Each xD card measures a mere 20mm by 25mm by 1.7mm, making them smaller in overall size than even SD and MMC cards. The maximum capacity of xD cards is expected to reach 8 GB, but typical cards currently top out at 1 GB. The read speeds of xD cards are up to 5 MB/s, while write speeds can be up to 3 MB/s, making them fast, but not the fastest. xD cards also operate on 3.3V, and are promoted not only for their minimal size, but for their low power consumption. This 128MB Olympus model (http://www.geeks.com/details.asp?invtid=O-XD-PC128-N&cat=RAM) is an example of a typical xD card.
Memory Stick
Memory Stick flash memory was first launched back in 1998, and although it has the support of many manufacturers, it seems to be most prominently used in Sony brand devices, including digital audio devices, cameras, and even televisions. Memory Stick is promoted by the group at the official Memory Stick website (http://www.memorystick.com/), which has a good deal of information about the media and applications for it.
Memory Stick flash memory looks a bit like a stick of gum, but slightly smaller, measuring about 50 mm by 21.5 mm by 2.8 mm. Current models can be expected with capacities of up to 2 GB, and Memory Sticks with capacities of 4 GB to 8 GB may be available soon. According to the Memory Stick website, maximum (theoretical) data transfer rates of up to 160 Mbps (megabits per second, or 20 MB/s) can be expected, although real-world results will most definitely be lower. Expect a Memory Stick to actually provide read speeds of up to 2.45 MB/s, and write speeds of up to 1.5 MB/s. Memory Sticks come in four flavors (so to speak): the original Memory Stick, Memory Stick PRO, and Duo versions of each. Memory Stick PRO offers faster speeds and larger capacities than the original Memory Stick. The Duo modules are smaller and actually use an adapter to fit into Memory Stick slots. Note that not all devices that take Memory Sticks can use Memory Stick PRO modules; be sure to check your manual. This 256MB SanDisk model (http://www.geeks.com/details.asp?invtid=SDMSV-256-A10N&cat=RAM) is an example of a typical Memory Stick card.

Final Words
Say you have devices that require some or all of these different types of flash memory. Interfacing with all of them could be inconvenient if not for devices such as this 15-in-1 flash memory card reader that Computer Geeks sells (http://www.geeks.com/details.asp?invtid=USB20-18-IN-1). A compact device (1/2" x 4" x 2-1/4") such as this is all that is necessary to read and write to any of the cards covered in these two Tech Tips, and then some. Flash memory can be a bit confusing for such a small, seemingly simple device. Six of the more common formats have been covered in this series of Tech Tips, and hopefully the background information and basics of each technology can help users make purchasing decisions with more confidence.
Motherboard Basics
A recent Tech Tip covered the basics of selecting a computer case (http://geeks.com/pix/techtips-122304.htm) and made mention of the various sizes that correspond to motherboards of different form factors. A few people wrote in expressing interest in understanding more about the basics of motherboards, and that's exactly what this Tech Tip intends to address. A motherboard, also known as a main board, is the primary circuit board inside of a computer, and is where the central processing unit (CPU), memory, expansion slots, drives, and other peripheral devices are connected. The circuitry on a motherboard facilitates the communication between all of the devices in the computer, making it as critical to a system's performance as items such as the CPU or memory. The core circuitry of a motherboard is referred to as its chipset, and generally the manufacturer of the motherboard is not the manufacturer of the chipset. Intel does produce motherboards with their own chipsets, but buying a motherboard from a brand such as Gigabyte, Biostar, or ASUS means getting a board with either a VIA, Nvidia, SIS, or Intel brand chipset.

1. Form Factor
The different basic shapes and sizes of motherboards are categorized as form factors. There are several standard form factors available, but some of the more common ones found in desktop computers include ATX (http://www.formfactors.org/developer/specs/atx2_2.pdf), Micro ATX (mATX) (http://www.formfactors.org/developer/specs/matxspe1.2.pdf), FlexATX (http://www.formfactors.org/developer/specs/FlexATXaddn1_0.pdf), and Mini-ITX (http://www.via.com.tw/en/initiatives/spearhead/mini-itx/). The basic sizes of each are as follows:

ATX: 12" x 9.6" (305mm x 244mm)
Micro ATX: 9.6" x 9.6" (244mm x 244mm)
FlexATX: 9.0" x 7.5" (229mm x 191mm)
Mini-ITX: 6.7" x 6.7" (170mm x 170mm)
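The size differences in the list above are easier to compare when reduced to board area. A quick illustrative sketch using those same dimensions (millimetres, width x depth):

```python
# Form factor dimensions from the list above, in millimetres (W x D).
FORM_FACTORS = {
    "ATX":       (305, 244),
    "Micro ATX": (244, 244),
    "FlexATX":   (229, 191),
    "Mini-ITX":  (170, 170),
}

for name, (w, d) in FORM_FACTORS.items():
    area_cm2 = w * d / 100                      # mm^2 -> cm^2
    pct_of_atx = 100 * (w * d) / (305 * 244)    # relative board area
    print(f"{name:10s} {w} x {d} mm = {area_cm2:5.0f} cm^2 ({pct_of_atx:3.0f}% of ATX)")
```

Run it and a Mini-ITX board comes out at roughly two-fifths the area of a full ATX board, which is why the smaller form factors give up memory and expansion slots.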
ATX and mATX are by far the most popular motherboard sizes for desktop computers, and as seen in the list above, are also some of the largest. More real estate on a motherboard allows for greater expansion possibilities and extra features, which make the use of these boards more flexible. A Mini-ITX board may feature just one slot for memory and one slot for an expansion card, while a typical ATX board may feature four memory slots and six slots for expansion cards. Each form factor has its own niche that it fits into, from workstations and gaming systems for larger boards to media centers and in-car computers for smaller boards. There is definitely overlap between the potential applications of each form factor, and other features and capabilities will also influence the targeted use.

2. CPU Socket
The major processor manufacturers, AMD and Intel, are constantly waging a battle to offer the fastest, most powerful processors available. Getting more speed and performance out of a relatively small chip generally requires a change to the physical dimensions as each new generation of processor is released. Therefore, motherboards need to evolve at the same pace in order to accept the new CPUs. Back in the day, AMD and Intel processors shared a common CPU socket, but those days were short lived. AMD and Intel have since been traveling down their own, relatively parallel, paths of performance and speed increases, while using different designs. Selecting a motherboard for a modern AMD processor eliminates the use of any Intel processor, and vice versa. AMD's current offering for desktop processors includes the Athlon 64, which is available in Socket 939 (http://www.geeks.com/products.asp?cat=MBB#Socket939Motherboards) and Socket 754 (http://www.geeks.com/products.asp?cat=MBB#Socket754Motherboards) formats. The number in the names represents the number of pins present on the backside of the CPU that connect to the motherboard's socket. The Socket 939 Athlon 64 therefore has a staggering array of nine hundred and thirty-nine tiny pins to match up with the motherboard's socket. The Chaintech VNF4 Ultra (http://www.geeks.com/details.asp?invtid=VNF4ULTRA-N&cat=MBB) is an example of a Socket 939 motherboard based on Nvidia's NForce4 Ultra chipset technology. In addition to these two sockets, many AMD processors, including Athlon XPs, Semprons, and Durons, share the Socket A format (http://www.geeks.com/products.asp?cat=MBB#SocketAMotherboards), also known as Socket 462 thanks to its having 462 pins for connecting to a motherboard. Intel's latest offering for their Pentium 4 and Celeron processors, LGA 775 (http://www.geeks.com/products.asp?cat=MBB#Socket775Motherboards), doesn't have pins at all, and basically moves the pins from the CPU to the motherboard's socket. Perhaps this design move puts the burden of bent-pin warranty claims on someone else, but it is fairly unique.
The Biostar P4M80-M7 (http://www.geeks.com/details.asp?invtid=P4M80-M7&cat=MBB) is an example of an LGA 775 motherboard based on the VIA P4M800 chipset. Other Intel processors still on the market utilize the Socket 478 format (http://www.geeks.com/products.asp?cat=MBB#Socket478P4Motherboards) for Pentium 4 and Celeron processors. Although most motherboards support just one CPU socket, some applications benefit from having more than one processor to tackle the tasks at hand. Servers and high-end workstations are two examples where a dual-processor system, such as could be run on the Tyan Thunder i7500 (http://www.geeks.com/details.asp?invtid=S2720GN-2-XEON18&cat=MBB) motherboard, might make light work of more advanced applications.

3. Components
Components is a fairly vague term for this section, but the items to be covered are fairly diverse. Computer systems all use memory, storage devices, and power supplies, but among the many differences between motherboards is the type and quantity of connections for these components. Most modern systems use DDR memory (http://www.geeks.com/products.asp?cat=RAM#184pinDDRDIMMMemory), but DDR-2 memory (http://www.geeks.com/products.asp?cat=RAM#240pinDDR2DIMMMemory) is becoming more common and will eventually become the standard. Although some boards provide slots for both types of memory, it is generally the case that only one or the other technology is supported. Besides operating differently, the physical difference of DDR having 184 pins and DDR-2 having 240 pins prevents them from being interchangeable. Going forward, users will have to decide whether they want to jump on the new technology bandwagon when selecting a motherboard, or try to continue using their existing DDR for as long as possible. Regardless of technology, most motherboards come with two to four slots for memory, although as mentioned, Mini-ITX boards may offer just one.
Hard drive technology is changing too, as mentioned in the Tech Tip comparing SATA to ATA hard drives (http://geeks.com/pix/techtips-010605.htm). Most motherboards over the past few years have offered two ATA connections, which could support up to 4 drives. With SATA becoming more popular, some boards now offer a mix of ATA and SATA connections, while others have abandoned ATA altogether and instead offer multiple SATA connections which support just one drive each. In addition to type and quantity, motherboards can also offer choices in hard drive capabilities by integrating RAID (http://www.geeks.com/pix/techtips-012705.htm)
controllers onboard, as found on the ASUS K8V SE Deluxe (http://www.geeks.com/details.asp?invtid=K8VSEDELUXE&cat=MBB). As systems become more advanced, they often impose special power requirements to keep them running smoothly. Most motherboards feature the typical 20 pin ATX power connector, while some server boards may have a 24 pin connection in its place. Motherboards for AMD Athlon 64 and Pentium 4 processors will have a second power connection located in close proximity to the CPU socket for providing the extra power that today's high end processors demand. This special 4 pin connection isn't found on every AMD Socket A motherboard, but it will most definitely be located on an AMD Socket 939 motherboard. Power supplies have been including this special connection for years, but for those upgrading an old system with a new motherboard, the power supply may be just one more item that has to be upgraded as well. 4. Extra Features Many motherboards now include features onboard that were once only available as expansion cards to be purchased separately. A typical motherboard will now include stereo sound capabilities, a 10/100 LAN connection, and a few USB 2.0 ports on the back panel. Depending on the budget and needs of the end user, many motherboards may also include other convenient features such as integrated Firewire ports, VGA connections, and onboard RAID controllers. Although many of these items may be added later with expansion cards, if you know you want them upfront, a bit of installation hassle and expense can be eliminated by finding a board with just about everything you want included. That said, there aren't many choices of onboard components, so it's a case of take it or leave it. For example, you may want stereo sound included, but find most motherboards offer 5 channel sound where you would prefer 8 channel. In that case, it may be a good thing that motherboards include expansion slots to add the sound card of your choice. 5. 
Expansion Slots A motherboard typically provides at least one slot for the installation of a graphics card (http://www.geeks.com/products.asp?cat=VCD) and a few slots for expanding the capabilities of the system in other areas. Graphics cards are available in PCI, AGP, and now PCI Express formats, and matching a motherboard to the appropriate card is a key step. Most motherboards released over the past few years include an AGP slot, and the new wave of motherboards are now starting to feature PCI Express slots for graphics card installation. PCI slots are found on most motherboards, but are much slower than AGP and PCI Express slots, so they are not the optimal choice for graphics. ATX motherboards typically feature four to five PCI slots, and although they could be used for secondary display graphics cards, more common applications include sound cards, network cards, RAID controllers, TV tuners, modems, and USB/Firewire controllers. Considering that many of these items are now included onboard, having multiple PCI slots isn't quite as important as it used to be. 6. Style With enthusiasts adding windows and special lighting effects to just about every feature of a computer, why should the motherboard be left out of the action? Long gone are the days of the stereotypical green PCB with white connectors; now most boards feature a vibrantly colored PCB and a rainbow of colors on expansion slots, memory slots, drive connectors, and so on. For example, if someone were undecided on a mATX board for their Socket 754 AMD Athlon 64, style might be the deciding factor. The Chaintech MK8M800 (http://www.geeks.com/details.asp?invtid=MK8M800&cat=MBB) and the Biostar K8VGA-M-N (http://www.geeks.com/details.asp?invtid=K8VGA-M-N&cat=MBB) are similar boards featuring the VIA K8M800 chipset and prices under $70. The golden PCB with black and white
features of the Chaintech board may appeal to some, while the red, white, blue, and yellow of the Biostar may sway others. In general, a particular model is only available in one color scheme, and many manufacturers use the same theme across their entire current lineup. As an example, the Biostar board for AMD Athlon 64 processors above features the same basic style as this Biostar board for the new Pentium LGA 775 processors (http://www.geeks.com/details.asp?invtid=P4M80-M7&cat=MBB). In addition to coloring, some manufacturers will include LED lighting on chipset cooling fans, or accessorize motherboards with matching cables to complete the unique look of the board. Some people may scoff at colors being included in the list of key features on motherboards, but there will be some who shop for style first, and then performance. Final Words There are many factors to address in selecting a motherboard, and this Tech Tip really just scratched the surface of the basic choices that may need to be considered. Far more technical decisions may need to be made by the advanced user, but covering the six basic areas discussed above is a good start for users of any level.
Expansion Cards Part 1: PCI The expansion slots available on motherboards allow for a variety of upgrades in a computer system, but matching the appropriate card to an available slot needs to be addressed before making any purchasing decisions. The most common types of expansion cards for modern computer systems can be broken down into three formats: PCI, AGP, and PCI Express. Each of these formats will be addressed separately in this three part series of Tech Tips, starting with PCI. The letters PCI stand for Peripheral Component Interconnect, a term used to describe a bus that connects components directly to the system's memory and processor through the frontside bus. When discussing communications on a motherboard, the term bus has nothing to do with the big yellow thing that takes the kids to school. There may be several buses in a computer, and like the PCI bus, they are all responsible for managing the communication traffic from different devices to the processor. The frontside bus is a high speed connection that manages the processor's communication with items such as hard drives, memory, and PCI devices, while not burdening the processor with all of the management responsibilities. First developed by Intel in the early 1990s, PCI was spawned from even earlier (and slower) bus architectures such as ISA (Industry Standard Architecture) and VL-Bus (VESA Local Bus), which were common back in the 1980s and 1990s. The original specification for the PCI bus had a speed of 33 MHz, with a 32-bit bus width, and a maximum bandwidth of 132 MB per second. There have been a few revisions to the PCI standard which have significantly increased these specifications, taking them to 66 MHz, 64-bit, and 533 MB per second, respectively. The 32-bit and 64-bit versions have different physical features, and most motherboards only offer 32-bit connections. 
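The bandwidth figures above follow directly from the clock speed and the bus width. A quick sketch in Python makes the arithmetic visible (this uses the nominal 33 and 66 MHz clocks; the exact clocks are 33.33 and 66.66 MHz, which is why published figures round up slightly):

```python
def pci_bandwidth_mb_s(clock_mhz, bus_width_bits):
    """Peak PCI bandwidth in MB/s: one transfer per clock, width/8 bytes each."""
    return clock_mhz * (bus_width_bits / 8)

print(pci_bandwidth_mb_s(33, 32))  # original PCI, 33 MHz x 32-bit: 132.0 MB/s
print(pci_bandwidth_mb_s(66, 64))  # revised PCI, 66 MHz x 64-bit: 528.0 MB/s
                                   # (quoted as 533 MB/s using the exact 66.66 MHz clock)
```

Keep in mind this is the peak figure for the whole bus, shared by every device attached to it.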
The original power specification had PCI devices operating on 5V DC, and with the revisions came the capability for devices to continue using 5V, as well as the ability to operate on 3.3V DC. A simple explanation of 32-bit and 64-bit can be had by continuing the analogy of buses and traffic. Think of each bit as a lane of traffic on the communication path: a 32-bit bus has 32 lanes of traffic, and a 64-bit bus has 64 lanes. Just as a greater number of cars can travel simultaneously on a road with more lanes, more data can be transferred on a bus with a larger bit count. Motherboards can support multiple slots sharing one PCI bus, and although not particularly common, can include more than one PCI bus. Depending on the form factor of the motherboard, and other features that may be taking up space on the board, one can expect to have one to six PCI slots on a typical motherboard. For example, the mATX format Chaintech MK8M800 VIA K8M800 Socket 754 Motherboard (http://www.geeks.com/details.asp?invtid=MK8M800&cat=MBB) features just two 32-bit PCI slots, while the ATX format AOpen AX4GE Max Intel 845GE Socket 478 (http://www.geeks.com/details.asp?invtid=AX4GEMAX&cat=MBB) features six 32-bit PCI slots. A 32-bit PCI card features 124 pins for mating with a slot on a system's motherboard, and will fit into either a 32-bit or 64-bit slot (although data transfer will be 32-bit in either type of slot).
32-Bit PCI Card A 64-bit PCI card features 184 pins for mating with the appropriate slot on a system's motherboard, but can generally fit into a 32-bit slot as well, as long as features on the motherboard do not interfere. When installed in a 32-bit slot, data transfer on a 64-bit card will be limited to 32-bit.
64-Bit PCI Card The Intel STL2 Dual Socket 370 Server Board (http://www.geeks.com/details.asp?invtid=G7ESZ-BULKR&cat=MBB) is a good reference for comparing 32-bit and 64-bit PCI slots. Looking at the lower left corner of the motherboard shows four 32-bit PCI slots and two 64-bit PCI slots. Subsequent installments in this series of Tech Tips will look at AGP and PCI Express, each of which has its own unique physical features. Although the different format PCI cards may be interchangeable, PCI, AGP, and PCI Express cards do not work (or fit) in any other type of slot. Most PCI cards will be of the 32-bit variety, and the selection of items available is fairly extensive. Graphics cards, sound cards, network cards, RAID controllers, TV tuners, modems, and USB/Firewire controllers are all common items that may be added to a system through the use of a PCI card. Many of the items listed in the previous paragraph can be found integrated on modern motherboards, but these onboard devices offer no upgradeability. PCI devices provide plug and play installation, allowing a user to install (or remove) a device with ease. For example, an inexpensive 2-channel sound card (http://www.geeks.com/details.asp?invtid=AU8810&cat=SND) may be good enough for someone initially, but down the road they may decide that something like the 7.1 channel Sound Blaster Audigy 2 (http://www.geeks.com/details.asp?invtid=SB0350-1&cat=SND) offers the sound quality they really want. Upgrading is a matter of powering down the system, swapping the cards, rebooting, and installing the new software/drivers (OK, perhaps a bit oversimplified). 
The good thing about PCI cards is that, even if you have a board with a built-in feature (such as the onboard sound mentioned above), your motherboard's BIOS will usually let you disable that feature if you want to add an upgraded card (such as the Audigy sound card in the example above), or the card can complement the feature already built in (such as an IDE RAID card). The one area that drove the development of AGP was the performance of PCI based graphics cards. The demands of fast-paced video games, and other graphically intensive applications, require a great deal of bandwidth, which just wasn't available on the PCI bus. Considering that all of the devices on the PCI bus share the available bandwidth, an even faster, dedicated bus was required to handle just the graphics data. PCI graphics cards (http://www.geeks.com/products.asp?cat=VCD#StandardPCI) are still available though, and make for an easy way to add a second display to a system currently operating on an AGP or PCI Express graphics card. Final Words The PCI slot has been around for a while, and seems to have a place in at least the near future of computer architecture. AGP and PCI Express offer performance benefits that the PCI standard cannot match, but for many applications, the performance offered by PCI is more than adequate. Be sure to check out the next Tech Tip in this series for the basics of AGP.
Expansion Cards Part 2: AGP The first in this series of Tech Tips on expansion cards took a look at the PCI slot, and the variety of devices that may find their home in one. Graphics cards are one of the many items that may be used in a PCI slot, but the demands of fast-paced video games require more speed and greater bandwidth than the PCI bus can provide. Thus, the AGP slot was born, providing a dedicated interface to transfer graphics data only. The letters AGP stand for Accelerated Graphics Port, a term used to describe a dedicated, point-to-point interface that connects a video card directly to the system's memory and processor. AGP was first introduced by Intel in 1996, and is based on their previous work in developing the PCI bus. Despite being based on PCI technology, the AGP and PCI slots on a motherboard are not interchangeable, so an AGP card cannot be installed into a PCI slot, and vice versa. The initial release of AGP saw a sizeable performance boost over PCI, and the few revisions to the standard helped increase this even more as the years went by. Other than having a dedicated path to the system's memory and processor, several other design features help AGP outperform PCI when it comes to graphics performance. Three of these advancements, pipelining, side band addressing, and the graphics address remapping table, are described below. Data transfer is improved through pipelining, a term used to describe the ability of an AGP graphics card to receive, and act upon, multiple instructions simultaneously. PCI data transfers require each piece of necessary information to be received separately before acting on any of it. Something called side band addressing also provides AGP with a performance boost. Basically, additional lines of data are included with each packet to instruct the system as to where this data is to be used. 
PCI data transfers do not have this addressing information, and the system must look at the data itself in order to determine its destination. Side band addressing is an obvious time saver, as well as a resource saver, since the processor doesn't have to analyze all data just to determine the address. AGP allows the operating system to store texture maps (http://webopedia.com/term/t/texture.html) in the system's memory, which allows for more space, and perhaps faster access, rather than being limited to the use of graphics card memory only. Graphics address remapping table, also known as GART, is a term used to describe a process that maps physical memory as virtual memory for the storage of texture maps. Basically, GART takes the system memory it is allowed to use to store texture maps and re-addresses it so that the system thinks these maps are actually being stored in the frame buffer, or virtual memory. This might not sound like anything special, but this re-addressing requires that the texture map be written to memory only once, and it is locked into place right where the AGP card can find it quickly. AGP can be broken down into different groups based on revisions to the specification (AGP 1.0, AGP 2.0, and AGP 3.0), as well as by the general speeds (1x, 2x, 4x, and 8x). There is overlap between the various categories, with AGP 1.0 supporting 1x and 2x, AGP 2.0 supporting 1x, 2x, and 4x, and AGP 3.0 supporting 4x and 8x. For a complete break down of all the combinations available, please visit this page (http://www.interfacebus.com/Design_Connector_AGP.html). Before taking a look at the specifications of AGP, let's have a refresher as to what was available on PCI prior to the birth of AGP. The standard PCI bus has a width of 32-bit, operates at 33 MHz, provides a maximum bandwidth of 132 MB/s (which has to be shared by all devices connected), and operates on 3.3V (or 5V on the original standard).
The first version released was AGP 1.0 with a speed of 1x, which offered the following specifications: 32-bit bus width, operating at 66 MHz, providing a maximum bandwidth of 266 MB/s, and utilizing 3.3V. So, it can be seen that right out of the gate, AGP offered double the bandwidth of PCI. Each speed increase over 1x provided double the bandwidth as well as double the effective clock speed through the use of special signaling. So, AGP 2x offers a maximum bandwidth of 533 MB/s at a speed of 133 MHz, AGP 4x offers a maximum bandwidth of 1066 MB/s at a speed of 266 MHz, and AGP 8x offers 2.1 GB/s at a speed of 533 MHz. Given the timeline of the evolution of these cards, AGP 8x cards dominate today's marketplace. Finding some cards that are backwards compatible is possible, but the tricky part may be ensuring that the slot on the motherboard will accept them. Comparing the connector on this 128MB Apollo GeForce FX6600 GT card (http://www.geeks.com/details.asp?invtid=AGP6600GT-128MB&cat=VCD), to the connector on this 64MB Hercules 3D Prophet Ultra II card (http://www.geeks.com/details.asp?invtid=AGP-3DPIIU-64MBTV&cat=VCD), and to the connector on this 256MB Chaintech GeForce FX5200 card (http://www.geeks.com/details.asp?invtid=SA5200-N&cat=VCD) shows that the first one is obviously different from the second two. The Apollo card is 8x only, the Hercules card is 4x/2x compatible, and the Chaintech card is 8x/4x, which results in different notches in the connector. AGP 1.0 only features a 3.3V connection, the release of AGP 2.0 saw the availability of both a 3.3V and 1.5V connector, and AGP 3.0 uses the same 1.5V, but only requires 0.8V for signaling. In order to protect cards of different voltages/formats, special keyed connectors were designed so that only the correct card could be installed on any motherboard. A universal connector was eventually released for AGP 1.0/2.0 which allowed cards of either voltage to be installed. 
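The doubling pattern above is easy to check for yourself. A short Python sketch (nominal 66 MHz base clock; the exact 66.66 MHz clock is what yields the published 266/533/1066/2133 MB/s figures):

```python
# Peak AGP bandwidth doubles with each speed grade over the base clock.
BASE_MHZ, BUS_BYTES = 66, 4  # nominal 66 MHz clock, 32-bit bus = 4 bytes/transfer

for multiplier in (1, 2, 4, 8):
    bw_mb_s = BASE_MHZ * BUS_BYTES * multiplier
    print(f"AGP {multiplier}x: ~{bw_mb_s} MB/s")
```

At 8x the nominal result (2112 MB/s) rounds to the 2.1 GB/s quoted above.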
For a schematic of the various connectors, please visit this page (http://www.motherboards.org/imageview.html?i=/images/articles/tech-planations/920_p5_1.jpg). Although AGP 3.0 can share in the use of a universal connection, many motherboards now only support 4x/8x cards based on the AGP 3.0 standard. Another specification for AGP was released between 2.0 and 3.0, referred to as AGP Pro. AGP Pro was intended to be the new standard to meet the demands of high powered graphics workstations, but it never really garnered widespread acceptance. Speeds of 1x, 2x, and 4x were supported with AGP Pro, and it utilized either a 3.3V, 1.5V, or a universal connector, similar to AGP 2.0. But the AGP Pro connector was not the same size as the standard AGP connector (see schematic at link above), meaning there were now three more possible connections to consider. An AGP Pro connection is longer than a standard AGP connection, and depending on the connector type, it could accept AGP 1.0 and 2.0 cards. Modern motherboards supporting AGP will specify what type of card is compatible with the board, so the guesswork is eliminated when trying to match one with the other. For example, this Socket 754 Chaintech motherboard (http://www.geeks.com/details.asp?invtid=MK8M800&cat=MBB) specifies that it has one AGP 4x/8x slot and this Biostar LGA 775 motherboard (http://www.geeks.com/details.asp?invtid=P4M80M7&cat=MBB) specifies that it has one 8x AGP slot. Final Words The AGP slot provided a much-needed boost to graphics cards as compared to the PCI slot, but game developers still managed to push the capabilities of this more powerful format to the edge. Something even faster was needed, and the next Tech Tip will take a look at that something in PCI Express. PCI Express is not only destined to be the successor to AGP 8x, but due to its flexibility, perhaps to PCI as well.
Expansion Cards Part 3: PCI Express In the first two installments of this series of Tech Tips, we took a look at PCI and AGP, undoubtedly the most common expansion slots in a computer today. With a few key improvements over both of these, PCI Express is destined to replace both and offer a whole new level of computer performance. As with AGP and PCI, the development of PCI Express can be attributed to Intel. This time, however, they partnered with some other heavy hitters in the industry, such as Microsoft, IBM, and Dell. Although it is now known as PCI Express, that was not their initial choice for its name. If it weren't for PCI-SIG (http://www.pcisig.com/home), the committee that oversees the PCI standard, we might be referring to this new format as 3GIO (Third Generation Input / Output). PCI Express development finds its roots in the PCI and AGP standards, but the physical connections are not interchangeable, and we will see that this is not the only difference. In the PCI standard, data from the various devices travels over a shared bus to the system. In the AGP standard, a dedicated, point-to-point interface transmits the data from the graphics card to the system. The PCI Express approach to data transfer involves a collection of two-way, serial connections that carry data in packets, similar to the way a network connection operates. The data from a PCI Express device no longer has to travel over a single bus, or a single dedicated connection, but can use a combination of these two-way serial connections to optimize throughput. The terms lane and link don't sound like anything overly technical, but take on special meaning with PCI Express. A link is the physical connection between PCI Express devices, which can consist of multiple lanes that transmit and receive data independently. Links can be composed of 1, 2, 4, 8, 12, 16, or 32 lanes, and the configuration allows flexibility in assigning just as many lanes as needed to any particular device. 
There are obvious benefits to this approach, and a few of the more significant include the following points: Each lane of PCI Express communication is dedicated between two points, so there is no sharing of bandwidth. PCI's main bottleneck was that all the devices were sharing the equivalent of one lane, and all of the available bandwidth also had to be shared. Multiple lanes can be assigned to devices whose performance would benefit from the extra speed and bandwidth. A PCI Express graphics card (http://www.geeks.com/products.asp?cat=VCD#PCIExpress(PCIe)) might be assigned 16 lanes (also referred to as x16), while a network adaptor might be assigned just 1 lane. Each lane made available to a device increases the potential for performance, as the data is sequenced up/down each available lane to optimize throughput. This process of sending the next byte of data down the next available lane is referred to as data striping, and obviously more lanes are better for instances where a good deal of data needs to be transmitted quickly. Speaking of graphics cards, another benefit is that multiple high performance graphics cards can be installed on one motherboard. The flexibility of PCI Express allows for two x16 PCI Express slots to be included for dual graphics cards, something that in the past required one AGP slot and one PCI slot. And due to the performance limitations, the AGP and PCI combination could not really be considered high performance. In addition to two x16 slots allowing for dual display operation, when incorporating specific graphics cards on a motherboard supporting nVidia's SLI technology (http://www.nvidia.com/page/nforce4_sli.html), the resources of the two separate cards can be bridged together for even greater performance on one display. An example of such a motherboard can be seen in DFI's LAN Party UT nF4 SLi-D (http://www.dfi.com.tw/Product/xx_product_spec_details_r_us.jsp?PRODUCT_ID=3469&CATEGORY_TYPE=LP&SITE=US).
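The data striping idea can be pictured with a short sketch. This Python toy simply round-robins a byte stream across lanes; the real PCI Express link layer adds framing, encoding, and flow control, so treat this purely as an illustration of the lane concept:

```python
def stripe(data: bytes, lane_count: int):
    """Round-robin each successive byte onto the next available lane."""
    lanes = [bytearray() for _ in range(lane_count)]
    for i, byte in enumerate(data):
        lanes[i % lane_count].append(byte)
    return lanes

# 16 bytes over a hypothetical x4 link: each lane carries a quarter of the stream.
lanes = stripe(bytes(range(16)), 4)
print([list(lane) for lane in lanes])
# lane 0 carries bytes 0, 4, 8, 12; lane 1 carries 1, 5, 9, 13; and so on
```

Doubling the lane count halves the amount of data each lane must carry, which is why wider links move large transfers faster.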
Just as motherboards supported both AGP and PCI as a means of allowing dual displays, some motherboards offer both an AGP slot and a PCI Express slot. Not only does this allow the user to run dual displays, it provides the added benefit of allowing an upgrade to be completed in stages. If a new PCI Express capable motherboard was just purchased, perhaps in addition to a new processor, the budget-conscious user may not want to spring for a new graphics card right away. By making an AGP slot available on boards such as the ECS 915P-A (http://www.ecsusa.com/products/915p-a.html), there is no reason to retire a perfectly good AGP card just because one bought a new motherboard supporting PCI Express. PCI Express graphics cards are quite similar to AGP cards, except for the connector configuration. The physical size and layout are comparable, and even the prices are not that different. The current selection of graphics cards at Geeks.com (http://www.geeks.com/products.asp?cat=VCD) doesn't allow you to compare apples to apples in any one card, but one may find many of the same AGP cards available in PCI Express format for roughly the same price (or for even less money). For the time being, the markets seem to be running in parallel, but in time a shift will occur in favor of PCI Express dominating the market. Minimizing the cost involved in motherboard fabrication could be another benefit. Let's look at the example of a network adaptor requiring just 1 lane to operate. If this were a PCI based network adaptor, traces for the standard 32-bit bus would need to reach this device, instead of the four traces required for 1 PCI Express lane. Motherboard design will obviously weigh heavily on this benefit ever being realized, and it is possible that higher-end boards might actually require more traces. Before taking a look at the ultimate benefit of PCI Express, the performance, let's have a refresher on the capabilities of PCI and AGP. 
The standard PCI bus has a width of 32-bit, operates at 33 MHz, and provides a maximum bandwidth of 132 MB/s (which has to be shared by all devices connected). AGP 8x has a 32-bit bus width, operates at an effective 533 MHz, and provides a maximum (dedicated) bandwidth of 2.1 GB/s. Each PCI Express lane is capable of 250 MB/s in each direction, and as advances in the necessary silicon technologies are realized, that number can be expected to quadruple. Presently, a 164-pin x16 slot can be expected to provide around 4GB/s of usable bandwidth in either direction, which is almost double the 2.1GB/s bandwidth that AGP 8x could offer! Definitely an impressive increase, and as the technology is refined, it will be very interesting to see the performance scale up. In the previous paragraph, I mentioned that the x16 slot features 164 pins. Each of the different lane configurations is accompanied by a different physical connector, and a sampling of an x16, x8, x4, and x1 can be seen in the graphic on this page (http://www.amdboard.com/pci-express.html). For a real world example, the Chaintech VNF4 Ultra Athlon 64 Socket 939 motherboard (http://www.geeks.com/details.asp?invtid=VNF4ULTRA-N&cat=MBB) shows an actual installation of one x16 slot and two x1 slots. Graphics cards are obviously going to benefit the most from the power and performance available with PCI Express, but as mentioned, other devices will also be able to take advantage of this new standard. The example of a network adaptor is just one that not only can use PCI Express, but will also see performance benefits. A Gigabit Ethernet adaptor will be more likely to actually achieve its rated speed thanks to the main bottleneck being removed in the form of the slower, narrower PCI bus. Other bandwidth intensive devices, such as RAID controllers, can also be expected to jump off of the slower PCI bus and find a smoother ride on PCI Express. 
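Since total link bandwidth is just the per-lane rate times the lane count, the numbers above are simple to reproduce (first-generation, per-direction figures; real usable throughput is a bit lower after protocol overhead):

```python
LANE_MB_S = 250  # per lane, per direction, first-generation PCI Express

for name, lane_count in (("x1", 1), ("x4", 4), ("x16", 16)):
    print(f"{name}: {lane_count * LANE_MB_S} MB/s per direction")
# x16 works out to 4000 MB/s (~4 GB/s), versus 2.1 GB/s for AGP 8x
# and 132 MB/s shared among all devices for standard PCI
```

This also shows why a single x1 lane (250 MB/s) already comfortably outruns the entire shared PCI bus.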
Although PCI devices requiring less bandwidth may not see any performance benefits from going to PCI Express, as the standard achieves greater mainstream acceptance, the cost implications may find these devices shifting over anyway, just as happened with the transition from ISA to PCI. Final Words The higher speeds and flexibility available from PCI Express have it destined to not only be the successor to AGP 8x, but to PCI as well. The immediate performance increase over the older technologies is quite impressive, and given time the benefits will be even greater. Only time will tell how long this transition will take, but somewhere in the not-too-distant future we will be talking about motherboards that only support PCI Express, and AGP and PCI will go the way of the lowly ISA slot.
Wireless Networking, Part 1: Capabilities and Hardware These days it isn't uncommon for a home to have multiple personal computers, and as such, it just makes sense for them to be able to share files, as well as to share one Internet connection. Wired networking is an option, but it is one that may require the installation and management of a great deal of wiring in order to get even a modestly sized home set up. With wireless networking equipment becoming extremely affordable and easy to install, it may be worth considering by those looking to build a home network, as well as by those looking to expand on an existing wired network. The first installment in this two-part series of Tech Tips will provide an introduction to the basic capabilities and hardware involved in wireless networking. Once that foundation has been established, we'll take a look at a few setup and security related considerations that should be addressed once the physical installation is complete. Capabilities The basic standard that covers wireless networking is the Institute of Electrical and Electronics Engineers (IEEE) 802.11, which is close kin to the wired Ethernet standard, 802.3. Many people will recognize 802.11 more readily when accompanied by one of three suffixes (a, b, or g), used to specify the exact protocol of wireless networking. The 802.11a protocol first hit the scene in 2001, and despite a small surge in recent popularity, it is definitely the least common of the three at this time. Its signals are transmitted on a 5 GHz radio frequency, while b and g travel on 2.4 GHz. The higher frequency means that the signal can travel less distance in free space and has a harder time penetrating walls, thus making the practical application of an 802.11a network a bit limited. The maximum transfer rate, however, is roughly 54 Mbps, so it makes up for its limited range with respectable speed. 
As mentioned, 802.11b and 802.11g networks operate on a 2.4 GHz radio band, which gives a much greater range as compared to 802.11a. One downside to being on the 2.4 GHz band is that many devices share it, and interference is bound to be an issue. Cordless phones and Bluetooth devices are two of many items that operate at this frequency. The range of these two protocols is about 300 feet in free air, and the difference between the two comes down to speed. 802.11b came first, released back in 1999, and offers speeds up to 11 Mbps. 802.11g first appeared in 2002, and it is a backwards-compatible improvement over 802.11b that offers speeds up to 54 Mbps. On top of these protocols, some manufacturers have improved upon the 802.11g standard and can provide speeds of up to 108 Mbps. This doesn't involve a separate protocol, just a bit of tweaking in areas like better data compression, more efficient data packet bursting, and the use of two radio channels simultaneously. Typically, stock 802.11g equipment is not capable of these speeds, and those interested need to shop for matched components that specify 108 Mbps support. I say matched components because this is not a standard protocol, and the various manufacturers may take different approaches to achieving these speeds. In order to ensure the best results when trying to achieve these elevated speeds, components from the same manufacturer should be used together. For instance, only Netgear brand network adaptors rated for 108 Mbps data transfer should be used with something like the Netgear WG624 wireless router (http://www.geeks.com/details.asp?invtid=WGT624NAR). Considering your typical broadband Internet connection is going to offer data transfer rates of 10 Mbps or less, it can be seen that even 802.11b would be more than adequate if you just want to surf the web. 
Sharing files on your LAN (Local Area Network) is where the faster protocols will really make a difference, and comparing the prices of 802.11b and 802.11g components may show that there is little to no difference in selecting a g capable device over a comparable b capable device.
Hardware

Access Point

A Wireless Access Point (WAP) is the central device that manages the transmission of wireless signals on a network. A basic access point may be capable of handling up to 10 connections, while more robust APs may manage up to 255 connections simultaneously. The D-Link DWL-1000AP+ (http://www.dlink.com/products/?pid=37) is an example of a wireless access point capable of 802.11b transmissions.

Router

In somewhat technical terms, a router is a network device that forwards data packets. It is generally the connection between at least two networks, such as two LANs, or a LAN and an ISP's (Internet Service Provider's) network. For our purposes, and for the sake of simplicity, a wireless router is basically an access point with the added feature of a port for sharing a broadband Internet connection. The D-Link AirPlus G (http://www.geeks.com/details.asp?invtid=DI524-R&cat=NET) is an 802.11g capable router that provides access for numerous wireless connections and four hard-wired connections to one WAN (Wide Area Network, i.e. Internet) connection. A typical router for home use will generally cost less than an access point, and via settings within the firmware, can be used as just an access point anyway. Wired or wireless, all the computers using the router can share files over the network, as well as sharing a broadband Internet connection. Communication between wireless computers (or a wireless computer and a wired computer) will max out at 54 Mbps, while communication between wired computers will take full advantage of the 100 Mbps provided by the 802.3 protocol.

Network Adaptor

A network adaptor is required for every computer that you would like to connect to the wireless network. Many laptops, such as this Sony Centrino 1.5 GHz (http://www.geeks.com/details.asp?invtid=PCGZ1RA-R&cat=NBB), now include a built-in wireless adaptor, so no extra hardware is needed.
For those with systems that don't have wireless capabilities built in, adding them is fairly simple and can be done using a variety of connections. Desktop computers can go wireless by adding a PCI slot network adaptor such as the 802.11g capable D-Link DWL-G510 (http://www.dlink.com/products/?pid=308). Notebook users can easily add wireless connectivity by using a PCMCIA adaptor, such as this 802.11g capable device (http://www.geeks.com/details.asp?invtid=PBW006-N&cat=NET). And for truly convenient plug-and-play connectivity to wireless networks, USB adaptors such as this 802.11g capable dongle (http://www.geeks.com/details.asp?invtid=80211GWUD&cat=NET) are available.

Antenna/Extender

These items are not essential, but given the specifics of a wireless environment, they may be helpful. Devices such as the Hawking Hi-Gain Antenna (http://www.geeks.com/details.asp?invtid=HAI6SIPN&cat=NET) or the Super Cantenna (http://www.geeks.com/details.asp?invtid=SCB10&cat=NET) serve to increase wireless signal strength, and therefore extend the range of a given wireless network. Not only can a large area of open space be covered, but signal quality may be improved in structures with walls and floors that obstruct transmission.

Final Words

In this Tech Tip, we took a look at the basics of wireless networking as it relates to capabilities and hardware. In the second part of this two-part series, we will look at some of the basic setup and security considerations that should be addressed. The physical installation of a wireless network may be far easier than that of a wired network, but the more difficult part is setting up the software and security to make sure everything stays up and running without incident.
Wireless Networking, Part 2: Setup and Security

The first installment in this two-part series of Tech Tips provided an introduction to the basic capabilities and hardware involved in wireless networking. In this final installment, we will look at some of the basic setup and security considerations that should be addressed. The physical installation of a wireless network may be easier than that of a wired network, but the more difficult part is setting up the software and security to make sure everything stays up and running without incident. Although this Tech Tip is by no means an exhaustive resource on configuring a wireless network, it will provide information and pointers that can be applied to most typical installations. Many of these tips are general enough that they may provide good advice for those using wired networks as well. For the sake of this article, we will assume that the hardware has been successfully installed physically, and that the user is now prepared to set up and secure the system through software. Wireless devices, especially routers/access points, generally include a web-based configuration utility that allows the user to customize the hardware to meet their needs. The hardware will most likely work with minimal configuration, but making it work while protecting the integrity of the network may take a few more steps. In addition to the configuration interface provided with the wireless networking hardware, Microsoft integrated a Wireless Network Setup Wizard with the release of Windows XP Service Pack 2 that will lead a user of any expertise through the installation of their network, and the Microsoft Broadband Network Utility will help them monitor and maintain the network just as easily once it is set up.

Change Default Password

Routers, whether wired or wireless, require a password for configuring the various settings, and all of them ship with extremely simple default passwords.
The first step in setting up the router should be to change the default password to something more difficult to guess. Longer passwords that use a combination of letters and numbers are preferable, as they make hacking attempts that much more difficult.

Change Router IP Address

Most routers ship with a default IP (Internet Protocol) address, something like 192.168.1.1, which is used to access the configuration utility interface, as well as by the network itself for negotiating the LAN and WAN connections. The configuration utility of most routers includes a page that allows the default IP address to be changed manually. Although changing the default IP address doesn't provide a great amount of security, since it can easily be discovered anyway, it may deter intrusion by local users casually scanning the network.

Configure Router or Access Point Use

In the first part of this series, I mentioned that almost all routers intended for home use can also double as wireless access points, which is generally accomplished by clicking a check box within the control panel software. If a wireless router is being added to a network with an existing router and broadband connection, the new device needs to be set to access point mode. Otherwise, there could be a conflict, as the network may not know where to expect the Internet connection now that it has two routers that both want to serve as the gateway. If the wireless router is replacing an existing router, or is the only one on the network, this should not be an issue, as these devices generally ship configured to operate as a router by default.

Broadcasting the SSID

The SSID, or Service Set Identifier, is basically the name assigned to a particular wireless network. The user can choose just about any name they want, as long as it is less than 32 characters long, and they just need to be sure that all computers on the network are configured to use the same name. Two steps related to the SSID can be taken to help improve the security of the network. First, change the default SSID to a unique name that includes a combination of letters and numbers and doesn't reveal anything personal about you or your network. Second, disable the broadcast of the SSID once all of your computers are successfully connected, even if your router/access point recommends broadcasting it. I have used a few wireless routers, and all of them have a check box in the control panel for enabling/disabling the broadcast of the SSID, and all have recommended leaving broadcasting enabled. Broadcasting the SSID allows new computers to easily find your network, but it also puts the name out there for anyone within range to see, allowing would-be hackers to get one step closer to compromising your security. In a home environment, there are probably few computers that need to access the network, and if more are ever added, you can temporarily enable the broadcast to get them set up.

DHCP Server

The DHCP (Dynamic Host Configuration Protocol) server is a feature of most routers that makes adding new computers extremely simple. Whenever a new computer connects to the network, the router assigns it an IP address, instead of the user having to assign an IP address to each machine manually while sitting at that particular computer.
This makes configuring a network very easy, but it also leaves the network vulnerable, as any new computer detected will be welcomed to the neighborhood and automatically assigned an IP address. Two different approaches can be taken to improve security as it relates to the DHCP server. One method, and the best as far as security is concerned, is to disable the DHCP server. This will require that all computers authorized to connect to the network be configured manually, but it will prevent unauthorized computers from obtaining an IP address. The second method doesn't provide bulletproof security, but it is better than doing nothing. In general, a DHCP server can support up to 250 computers, and by default it leaves a range of addresses readily available for that many to connect. If disabling the DHCP server doesn't seem convenient, a user can limit the DHCP server to provide only as many IP addresses as they know they need. If you know there will never be more than five computers connected, limit the range of available IP addresses to a total of five within the configuration utility.

Different Levels of Encryption

All wireless components support some sort of encryption, which simply scrambles the information being sent across the network so that it can not easily be read by anyone else connected to the network. There are different types and levels of encryption, and a brief overview of each is provided below. WEP, or Wired Equivalent Privacy, was the first encryption format available on wireless networks. WEP allows the network administrator to assign an encryption string to be shared by all computers authorized to access the wireless network. WEP encryption is either 64-bit, 128-bit, or 256-bit, where the higher number represents stronger encryption, and the strings can be generated by the administrator as a series of letters and numbers.
WPA, or Wi-Fi Protected Access, is an improvement over WEP that starts off with a similar master encryption string and then mathematically derives encryption keys to keep the security dynamic. WPA continually changes the encryption keys used for each packet of data, and due to the extra processing required to support this protocol, the overall throughput of the connection may suffer slightly. Despite the potential for decreased speed, WPA is considered far more robust than WEP and should be implemented where possible. In some instances, WEP encryption has actually been defeated, making WPA all the more appealing.
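As a concrete illustration of generating such strings, here is a minimal sketch in plain Python (not tied to any particular router's configuration utility). It assumes the common convention that the advertised WEP strength includes a 24-bit initialization vector, so a "64-bit" key is entered as 10 hex digits and a "128-bit" key as 26:

```python
import secrets
import string

def wep_hex_key(bits=128):
    """Random WEP key in hex. The advertised strength counts a 24-bit IV,
    so a '128-bit' key has 104 secret bits, entered as 26 hex digits."""
    hex_digits = (bits - 24) // 4
    return "".join(secrets.choice("0123456789ABCDEF") for _ in range(hex_digits))

def wpa_passphrase(length=20):
    """Random mix of letters and numbers for a WPA pre-shared key."""
    alphabet = string.ascii_letters + string.digits
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(wep_hex_key(64))    # 10 hex characters
print(wep_hex_key(128))   # 26 hex characters
print(wpa_passphrase())   # 20 random letters and digits
```

A randomly generated string like this is far harder to guess than a word or phrase, which is exactly the advice given for router passwords earlier.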
Although most components support both of these encryption formats, and users can select the type they wish to use from within the control software, not all do. All devices on the network must be set to operate at the same level of encryption, which may mean that some devices will force others to be less secure than they are capable of. For example, a wireless network set up around this router (http://www.geeks.com/details.asp?invtid=DI-824VUP&cat=NET) could support either WEP or WPA encryption. But when two computers are added to this network, one using this network adaptor (http://www.geeks.com/details.asp?invtid=WN-4054P&cat=NET) and the other using this network adaptor (http://www.geeks.com/details.asp?invtid=PBW006-N&cat=NET), things change. The second adaptor does not support WPA; the whole network must therefore be configured to use WEP to accommodate it.
Router Position

As discussed in the first part of this Tech Tip, wireless devices can have a range of up to a few hundred feet in free space. When installed inside a home, this range may decrease greatly due to walls, floors and other obstructions, but the signal may still be strong enough to carry beyond the confines of the dwelling. A simple step that may help reduce the strength and reach of the network signal outside the house is to position the router/access point as close to the center of the house as possible. The potential for someone to detect the network from outside the home is then much lower than if the router were placed near a window, for example.

Final Words

There are definitely additional issues that could be considered when setting up a wireless network, but covering these basics will make a wireless network much more secure than it was straight out of the box. Many people are confident that no one would be interested in their home network, and feel security is just one more headache of technical mumbo-jumbo that they would rather not deal with. But whether a hacker wants access to personal files on the network or simply unauthorized access to the Internet, a few simple steps are worth the peace of mind of knowing you are as secure as possible.
Choosing a Portable MP3 Player: Part 1

MP3 players are everywhere! It seems that the number of makes and models in this market is growing daily, with features and capabilities intended to appeal to just about anyone shopping for one of these devices. MP3 players have been around much longer than the Apple iPod (http://www.geeks.com/products.asp?cat=MP3#iPodMP3Players), but there is no arguing that this one device opened the market to a much larger customer base. In addition to Apple's own success, the iPod paved the way for dozens of other manufacturers to offer their own twist on this technology. This series of Tech Tips will attempt to simplify things by taking a look at eight basic features of a typical MP3 player that may be important to a potential buyer: storage technologies, capacities, file formats, displays, batteries, extra capabilities, computer interfaces, and size. Part 1 of this series will cover the first four topics, and the second set of four will be covered in Part 2.

Storage Technologies

In general, portable MP3 players use one of two formats to store files on the device: flash memory or a hard drive. Flash memory similar to that used in digital cameras is found embedded in many lower capacity MP3 players. Due to the basic capacity limitation of flash memory, hard drive based units are required by those who need to store thousands of files on one device (or fewer files of higher quality). Flash memory based players typically offer capacities ranging from 128MB to 1GB (or maybe a bit higher), and the MSI MegaStick 511 (http://www.geeks.com/details.asp?invtid=5511-290&cat=MP3) is an example of a 1GB flash memory based device.
Hard drive-based units can provide much more space, and your typical Apple iPod (http://www.geeks.com/products.asp?Cat=MP3#iPodMP3Players) and Creative Zen (http://www.geeks.com/products.asp?Cat=MP3#ZENMP3Players) will use a hard drive in order to achieve their capacities of up to 40GB.
One of the key advantages of flash memory-based players is that they are "solid state", an old electronics term which used to mean "contains no tubes", but now basically means that a device contains no moving parts. No moving parts means fewer hardware breakdowns, longer battery life (playing time), and it means that the devices can be bounced around with no skips or damage to the device. If you're looking for a durable MP3 player to go jogging with or take to the gym, you probably want a flash-based player. There are other formats that may be used for portable MP3 players, and the Classic CM343R (http://www.geeks.com/details.asp?invtid=CM343R&cat=MP3) is an example of a device that plays MP3s from recordable CD media.

Capacities

The capacity of these players was touched on in the previous section, but there is more to consider. The capacity desired can have an impact on price and physical size, but the main thing to consider is how many files need to be stored. Several variables determine the quantity of music any given player may hold, namely file type and compression encoding bit rate. MP3 files, for example, may be encoded at bit rates ranging from low quality (64 kbps) up to high quality (320 kbps). Lower bit rates use less disk (or memory) storage space, but offer sound quality comparable only to a telephone call or AM radio. Higher bit rates, up to and exceeding that of CD quality sound, may be used, but of course take more space. As with all things, there is a trade-off between quantity and quality; think of it in terms of the number of hours of TV you can record to a VHS tape in SP, EP, and SLP modes. For the sake of discussion, we will use a decent bit rate of 128 kbps, which turns a 5-minute song into a file of approximately 5 MB.
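The arithmetic behind that approximation is easy to check. A quick sketch, using the 128 kbps figure from the text:

```python
def song_size_mb(minutes, bitrate_kbps):
    """Approximate encoded size: bitrate (kilobits/s) x seconds / 8 bits / 1000."""
    return minutes * 60 * bitrate_kbps / 8 / 1000

def songs_that_fit(capacity_mb, minutes=5, bitrate_kbps=128):
    """How many typical songs fit in a given capacity."""
    return int(capacity_mb / song_size_mb(minutes, bitrate_kbps))

print(song_size_mb(5, 128))       # 4.8 -> roughly 5 MB per 5-minute song
print(songs_that_fit(128))        # a 128 MB flash player holds ~26 songs
print(songs_that_fit(40 * 1000))  # a 40 GB iPod holds ~8333 songs
```

Doubling the bit rate for better sound halves the number of songs that fit, which is the quantity/quality trade-off in a nutshell.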
Some simple math shows that a 128 MB device, such as this Egoman unit (http://www.geeks.com/details.asp?invtid=MD230F-N&cat=MP3), will only hold about 25 such songs, while a 40GB iPod can hold about 8000. A device intended to be used only for jogging may do just fine with 128 MB of storage, while a device used in the car, at the office, and elsewhere may benefit greatly from more storage space, unless you like listening to the same handful of songs over and over again. Some players offer a base of onboard memory, plus the flexibility of adding more through an expansion slot. These slots accept flash memory, usually SD (http://www.geeks.com/products.asp?cat=RAM#SecureDigitalMemoryCard) or MMC (http://www.geeks.com/products.asp?cat=RAM#MultiMediaCardMemory), and this can be a cost-effective way to add 512 MB to a 128 MB device, such as this one from Ultra Products (http://www.ultraproducts.com/product_info.php?cPath=37&products_id=48).

File Formats
Calling these devices MP3 players may be a bit unfair, as most will actually read a few different file formats. MP3 is definitely the most popular, but other common formats include WMA (Windows Media Audio) and WAV (Microsoft's Waveform Audio). Less common formats are also supported by some devices, such as AIFF (Audio Interchange File Format) and AAC (Advanced Audio Coding). In addition to MP3, the iPod supports these two formats and a few others that most other players do not, which makes sharing these files with any other device just about impossible without conversion software. Software is available for creating MP3 files from audio CDs, as well as for converting digital audio files from one format to another. Titles are available for purchase from many companies, including the likes of Nero (http://www.nero.com/) and Roxio (http://www.roxio.com/), and other titles can be found as downloads, either free or as free trials.

Displays

Most MP3 players include some sort of display to help the user interact with the device. The size of the display will have an impact on the overall size of the unit, but larger displays can obviously contain more information and may be easier on the eyes. Basic information regarding the status of the device and its files is generally shown on the display, and settings for things such as the volume and equalizer can be manipulated with ease. The LCD display on the iPod is one of its great features, due to its large size (2" diagonally) and its LED backlighting for comfortable viewing in any lighting condition. The iPod is a larger device, however, and it can afford to have a larger display to convey information on menus, song artists/titles, volume, date/time, equalizer, battery status, and so on. Smaller devices obviously have smaller displays, but they still need to convey some basic information.
Using small fonts and symbols allows devices like this one from Perception Digital (http://www.geeks.com/details.asp?invtid=PD099-256FM&cat=MP3) to display a good deal of information at any one time. What is lost is the ability to view menus and playlists, as seen on the iPod, and you may need better vision to see the smaller characters. Displays are a convenience item, though, and some players have eliminated the display in the name of simplicity (and hopefully savings). The iPod Shuffle (http://www.apple.com/ipodshuffle/) doesn't include a display at all, and its slogan "Enjoy uncertainty" expresses the fact that you'll just have to go with the flow, as your interaction with the device is quite limited.

Final Words
For such small devices, there are many variables to consider when shopping for an MP3 player. The first part in this series covered four key items, and in the second part we will cover four more: batteries, extra capabilities, computer interfaces, and size.
Choosing a Portable MP3 Player: Part 2

This series of Tech Tips is geared towards simplifying MP3 players for the casual consumer by addressing eight key topics. In the first part of this series, we looked at storage technologies, capacities, file formats, and displays; in this part, we will wrap things up by looking at batteries, extra capabilities, computer interfaces, and size.

Batteries

Battery type and expected life are key features when considering any type of portable electronic device, and MP3 players are no different. Many devices now come with rechargeable batteries included, and the more convenient arrangements allow the batteries to charge while still in the unit, eliminating the hassle of having to remove them to be placed in a stand-alone charger. Even more convenient are devices that recharge via USB, so all you need to connect is one cable that serves both to transfer files and to transfer power from the computer to the device. Many devices do not come with rechargeable batteries, but going rechargeable is always an option to consider, since most support a standard format such as AA or AAA. A charger and a set of batteries (http://www.geeks.com/details.asp?invtid=V-1000) can be picked up relatively inexpensively, and over the course of the device's life the savings will add up when compared to the number of disposable batteries that would otherwise be used.
Speaking of the number of disposable batteries that will be used, the life expectancy on one charge (or one set of batteries) is of great interest, but it is generally harder to gauge from the manufacturer's information. Many devices do not publish a life expectancy, and those that do may need to be taken with a grain of salt. Conditions may vary from the manufacturer's test to the real world, so it is always a good idea to find an independent review of the device to see how it fared. Some devices with a single AAA battery may run for up to 30 hours on one charge, while a device using two AA batteries may only make it to 10 hours. The iPod includes a rechargeable battery that provides a good amount of run time on each charge, but unlike the batteries discussed so far, it is not readily available as an aftermarket replacement. If the battery dies, the unit needs to be shipped back to Apple for replacement, which proved to be quite unpopular with owners of previous generations of the device, as it seemed to be one of the device's few flaws. That issue has been addressed, but the battery in the new generation iPod is still proprietary and can not be replaced by the end user.

Extra Capabilities

Many MP3 players offer greater value and convenience by doing much more than just playing MP3s. Some devices, such as the MSI MegaPlayer 515 (http://www.geeks.com/details.asp?invtid=5515040&cat=MP3), include FM radio tuners and voice recorders for greater appeal. There are numerous other
handy features found on some devices that some people may find useful. Some will double as portable storage for any file type, some include basic e-mail clients, while devices with expansion slots can be used as a card reader when attached to a computer. Some devices are more appropriately called portable media centers, as they offer far more than just digital audio playback. Although a device such as the Creative Zen Portable Media Center (http://www.geeks.com/details.asp?invtid=70PF095000000-DT&cat=MP3) does play MP3 files, it can also play videos and show still pictures on its 3.8" color screen. Sony's new PSP (http://www.us.playstation.com/psp.aspx) is an exciting new portable device that takes things even further by adding video games to the list (while still offering digital audio playback).
Computer Interfaces

The means of getting files from the computer onto the MP3 player deserves consideration with respect to both the protocol used and the connection provided. Most devices use USB for file transfers, but Firewire is also an option, and this 40GB Apple iPod (http://www.geeks.com/details.asp?invtid=PE436A-NB&cat=MP3) actually supports both.
When selecting a device that utilizes USB, be sure to note whether it supports USB 2.0, or the much slower USB 1.1 standard, as units are still available using this older format. If you anticipate rotating your files regularly, or have a large capacity player to fill, the speed of a USB 1.1 device may frustrate you. USB 2.0 offers transfer rates up to 40 times faster than USB 1.1 (480 Mb/s versus 12 Mb/s), so keep that in mind when preparing to move a few thousand files! In addition to the protocol used, the physical connection may be worth paying attention to. Many devices, such as this Perception Digital player (http://www.geeks.com/details.asp?invtid=PD099-256FM&cat=MP3), offer a mini connection on the body for connecting a somewhat special USB cable for data transfer. If you want to add files to the device, you need to carry the cable with you, or take a chance that this type of cable would be available at any computer you may wish to connect to. Other devices, such as this Z-Cyber Zling player (http://www.geeks.com/details.asp?invtid=8MMC-ZLGU2-512&cat=MP3), feature a standard USB male connector right on the body of the device. With this design, the player can either be plugged directly into an available USB port, or if the size/shape of the device prevents this, a more typical USB cable is all that is needed.
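The difference is dramatic when moving a large library. Here is a rough sketch of the arithmetic, using the rated bus speeds from the text (real-world throughput is lower, but the 40x ratio holds):

```python
def transfer_minutes(total_mb, rate_mbps):
    """Idealized transfer time at the rated bus speed (8 bits per byte)."""
    return total_mb * 8 / rate_mbps / 60

library_mb = 2000 * 5  # roughly 2000 songs at about 5 MB each
print(f"USB 1.1 (12 Mb/s):  ~{transfer_minutes(library_mb, 12):.0f} minutes")
print(f"USB 2.0 (480 Mb/s): ~{transfer_minutes(library_mb, 480):.0f} minutes")
```

Even in this idealized calculation, filling a large player over USB 1.1 takes the better part of two hours versus a few minutes over USB 2.0.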
Size

The size of an MP3 player will be in large part determined by the combination of other features included with the device. Hard drive-based players are generally going to be larger than flash memory-based players due to the physical size of the drive. The type and quantity of batteries, the size of the display, and the type of computer interface provided may all impact the size of the device as well.
MP3 players are generally quite small, and for the most part are only as large as they are for two simple reasons: so the users don't lose them, and due to other technologies involved. For example, some MP3 players that utilize two AAA batteries for power are barely wider and slightly longer than the batteries themselves, providing just a little thickness upfront for the flash memory, circuitry, and display. If a smaller, reliable power source was available, who knows how small these devices could be.
Someone seeking a miniature device with a modest amount of storage for use while jogging may be able to find something about the size and weight of a pack of gum. And although an iPod can in no way be considered a large object, in the world of MP3 players it is bigger than most, and it is geared towards a different application.

Final Words

MP3 players come in a wide variety of shapes, sizes, and capabilities, all of which need to be weighed against what may be the most important feature: price. Given the great number of devices on the market today, there just may be a device out there to fit everyone's needs, and hopefully this Tech Tip will serve as a guide to what a user's basic needs may be.
Building Your Digital Music Collection

The previous two Tech Tips (http://www.geeks.com/pix/techtips.htm) took a look at eight basic features of portable MP3 players worth considering before laying down some serious money on one of these devices. Once you have a nice new MP3 player (http://www.geeks.com/products.asp?cat=MP3) with plenty of space for music, you need to fill it up! There are several ways to go about building your digital music collection, and we'll take a look at a few of them.
The first thing to address may be the term "MP3 player". Many of these devices play MP3 files in addition to a variety of other formats. Many of the files available for download are actually in a format other than MP3, but the term has been applied to cover this whole class of devices, whether it is 100% accurate or not.

Create Your Own

There are numerous software titles available that make creating MP3 files from CDs (or other sources) a simple process. Most involve minimal input from the user once they have configured their preferences, and will take the audio and convert it into the digital format of their choice. During the ripping process, most applications will query an online database, such as Gracenote (www.cddb.com), and take care of the file naming and ID tagging needed to make storing, sorting, and accessing the files a snap with most players.
Some of these applications may already be on your computer. Microsoft's Windows Media Player (http://www.microsoft.com/windows/windowsmedia/mp10/default.aspx) is one program that any Windows user already has that is more than ready for basic WMA and MP3 file creation; just drop in your CD and click "Rip". Many other titles may have come bundled with hardware included with your system. For example, many optical drives ship with a copy of Ahead's Nero (http://ww2.nero.com/us/index.html) or a suite of software from Roxio (http://www.roxio.com/en/index.jhtml). Either will handle the DVD or CD burning they were intended for, but they have decent MP3 creation modules as well.
There are a multitude of free, or at least free-to-try, MP3 encoding software titles, and a trip to your favorite search engine may provide a list longer than you care to investigate. Some names worth checking out include EZ CD-DA (Digital Audio) Extractor (http://www.poikosoft.com/), EZ MP3 Creator (http://www.linasoft.com/ezmp3c.html), and Virtuosa (http://www.virtuosa.com/index.php).
The great thing about digital audio files acquired this way is that they are yours to use on whatever device you choose. The same can not be said about files obtained from either of the next two methods to be discussed. The files obtained from legitimate download services are protected by DRM (Digital Rights Management), which restricts the use of the downloaded files to a limited number of computers and compatible portable devices,
as well as protecting the songs from redistribution by the end user. The files are yours to use, but not as freely as you may want, and perhaps only for as long as you maintain your account with the download service.

Pay Per Download

There are two main types of legitimate online sources of digital music: those that charge you for each download, and those that require you to subscribe to a service on a monthly basis. They offer the same types of files, but take different approaches to suit your budget and music needs. Apple's iTunes (http://www.apple.com/itunes/) may be the best known source for individual file downloads, thanks in no small part to the incredible popularity of the iPod (http://www.geeks.com/products.asp?cat=IPD) MP3 player. What some may not know is that iTunes is not just for iPod owners, or Macintosh owners for that matter; any PC compatible system can access the 99 cent downloads for use on a computer or compatible portable player.
Many other outlets offer digital music files for download, and even some mainstream brick-and-mortar stores have found their way onto the scene. Just as they have done with retail sales, Wal-Mart (http://www.walmart.com/music_downloads/introToServices.do) has managed to undercut the competition with their 88 cent music downloads.
Subscribe to a Service

Everyone is familiar with Napster (http://www.napster.com/) as one of the pioneers of file sharing, but it is back with a legitimate approach to music downloads. Although Napster does offer individual songs for 99 cents each, it also offers monthly subscriptions for $14.95. This monthly fee allows for unlimited downloads, and could be the ticket for someone looking to keep their play list fresh on a regular basis. One caveat to this otherwise good solution is that the number of MP3 players supported is currently very limited. Also, once your subscription lapses, so does the ability to access your music. Basically, you are renting the songs.
Other subscription-based services are available, such as the one from eMusic (http://www.emusic.com/) that charges a monthly fee, but restricts the number of downloads permitted every month.
Choosing between a service that charges for every download and one that charges a flat monthly fee will most likely be determined by the volume of downloads one intends to make. If you only want a handful of songs every few months, it may be worth it to pay per song. But, if you intend to amass the ultimate collection of music ever known to man, subscribing to a service on a monthly basis is obviously more practical.

Go Underground

Whether through first-hand experience or from the massive media attention, most people are well aware of other file sharing resources available on the Internet that can be used for acquiring MP3 files. Although the files are free, and users may feel they are operating anonymously, this may not be a safe means of acquiring media.
There are the obvious legal implications, as the RIAA has prosecuted file sharers for copyright violation (http://www.internetnews.com/bus-news/article.php/3497246), but there are other issues as well. The integrity of the files being downloaded is not guaranteed, and people may wait patiently for a song to download only to find it is of poor quality, incomplete, or, even worse, carrying a virus or trojan.
So, there are other pools of digital music, but swim at your own risk!

Final Words

Filling your new MP3 player doesn't have to cost anything except the time it takes to encode the songs from your favorite CDs. But, paying for a download service is a sure way to have the songs you want as they become available, and at a fairly reasonable price. These aren't your only options for acquiring digital music, but when taking other routes, proceed with caution.
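As a rough sketch of the per-song versus subscription trade-off discussed above, using the 99 cent and $14.95 figures mentioned earlier (prices are examples only and will vary by service):

```python
# Illustrative break-even between pay-per-download and subscription
# pricing. The figures match the examples in the article; real prices vary.

PER_SONG = 0.99      # dollars per individual download
MONTHLY_FEE = 14.95  # dollars per month for unlimited downloads

def cheaper_option(songs_per_month: int) -> str:
    """Return which pricing model costs less for a given download volume."""
    pay_per_song = songs_per_month * PER_SONG
    return "per-song" if pay_per_song < MONTHLY_FEE else "subscription"

# A subscription pays for itself at 16 songs per month:
# 15 * 0.99 = 14.85 (still cheaper per song), but 16 * 0.99 = 15.84.
```

In other words, a casual listener buying a few songs a month comes out ahead paying per download, while anyone downloading more than about fifteen songs a month is better served by a subscription.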
Seven Things to Consider When Choosing a PDA

In the early days, Personal Digital Assistants (PDAs) were not much more than glorified calculators with the ability to store contact information and brief notes. Now, the line between personal computer and personal digital assistant is blurred thanks to the advanced capabilities of these useful little devices. There are plenty of makes and models to choose from in the PDA market, and finding the right model to suit an individual's needs can be a dizzying challenge. This Tech Tip will take a look at seven basic things to consider when choosing a PDA in an attempt to help simplify the process.

Software

The software on a PDA is capable of running completely independent of your computer, but being able to share files and resources between them is one of the key convenience features. Software availability is one issue, but compatibility is another. In general, a PDA will come with one of two operating systems: Palm OS or Microsoft Pocket PC, each with its own very different approach to running one of these devices. Palm OS is the modern version of the operating system found on some of the first PDAs, the Palm Pilots. No longer just a dull, greyscale environment, the Palm OS is a sharp looking operating system with many software titles available (either included, as downloads, or for purchase separately) to do just about anything you would want to do on a PDA. Microsoft Pocket PC is themed after the familiar Windows operating system, and this similarity gives the millions of Windows users a comfortable environment to work with when transitioning to a PDA. The basic commands are the same, and of all the software titles available for a PDA running Pocket PC, many are reduced "pocket" versions of what might be found on a computer, such as Microsoft's Word and Excel. Multimedia applications are a strong point of the Pocket PC environment, with many titles developed to make these devices more enjoyable and versatile.
Applications are available for either operating system to do just about the same things, although specific titles available for one may not be available for the other.

Speed

The speed of the processors in PDAs is picking up, and some older desktop computers are being left in the dust by these little devices. Presently, the bulk of the devices on the market are running at clock speeds of 300 MHz or higher, sometimes much higher. The HP Ipaq HX4700 (http://www.geeks.com/details.asp?invtid=FA282A_ABADT&cat=PDA) sports a 624 MHz Intel processor, which is also currently the processor found in the high-end Dell Axim X50v (http://www1.us.dell.com/content/products/productdetails.aspx/axim_x50v?c=us&cs=19&l=en&s=dhs).
In general, faster processors can be found in the Pocket PC devices, but that does not mean that Palm OS devices are slow. Many experts would argue that the Palm OS runs more efficiently and can get by on less system memory, which helps keep the speed comparison competitive. The applications for PDAs have been optimized to run well with less processing power and less system memory, and to occupy a minimal amount of disk space, so either type of PDA should be able to handle typical tasks well. Speed may be the main concern on a desktop PC, but the focus is a bit different on handheld devices, where other features are definitely more important.

Connectivity

Being able to connect a PDA to a computer or other device may be essential for utilizing all of its features to their full potential. Most PDAs now use USB as the base means of transferring data to and from a PC, but there are a few wireless protocols that may be available on a PDA to make it even more convenient. Infrared is a short range protocol that allows a PDA to exchange data with another PDA, as well as with a compatible laptop or desktop computer. Many PDAs feature an infrared port, but not all computers do. Communications over this protocol are relatively slow, but may be useful for transferring basic data or synchronizing with a PC. Bluetooth is a protocol that operates on the 2.4 GHz radio band and provides greater range than infrared, but the speeds still aren't that great. Bluetooth can be used for transferring data from computers, but it can also allow other devices to connect to a PDA. For example, a Bluetooth-enabled PDA linked to a Bluetooth-enabled cell phone may be able to access the Internet using the phone as a sort of wireless modem. WiFi, just as on your typical computer, is the high speed wireless protocol (802.11x) that also operates on the 2.4 GHz radio band. WiFi will not only allow compatible devices to exchange files, but its high speed makes streaming media and web surfing convenient at home, or at the numerous wireless hot spots popping up in public and commercial settings.

Extra Features & Accessories

It isn't enough for PDAs to keep track of serious business items like appointments and contact information any more. They need to be able to multi-task, and most are now able to help their owners work and play. Many of these features may be considered nice extras by some, but others may insist on their availability when choosing a PDA. Many PDAs can now surf the Internet, stream movies, or play video games thanks to higher resolution color screens. Most now include a stereo sound processor for listening to MP3s or other audio sources, and can double as a digital voice recorder. Other PDAs take the fun features to a whole new level. Some PDAs now double as mobile phones. Or is it mobile phones that are doubling as PDAs? Many mobile phones have built-in cameras now, and PDAs like the PalmOne Zire 72 (http://www.palmone.com/us/products/handhelds/zire72/index_gps.epl) include a digital camera for capturing stills and video clips at decent resolutions. Taking another look at the Zire 72 shows off another feature that has become popular in PDAs: GPS (Global Positioning System) receivers that will help plot and track routes during travel.
Many PDAs also offer a wide array of add-on accessories to add ease of use, and maybe a touch of flash, to your PDA (http://www.geeks.com/products.asp?cat=PDA#PDAAccessories). For example, most manufacturers offer such things as spare batteries, extra styluses, and upgraded leather cases. Other cool items include a foldable keyboard that can be plugged into the PDA for easier typing (http://www.geeks.com/details.asp?invtid=967208-0403-CS&cat=PDA), an automobile charger so that you can charge your PDA while driving (http://catalog.belkin.com/IWCatProductPage.process?Merchant_Id=&Section_Id=1979&pcount=&Product_Id=160197), and even international power adaptors to use your PDA in different countries (http://www.palmone.com/us/products/accessories/chargerscablescradles/3172WW.html). Like cellphones, some PDAs even come with removable covers so that you can totally customize how your PDA looks (http://www.geeks.com/details.asp?invtid=P10723U&cat=PDA).
Expandability

PDAs generally come with a base amount of memory on board for storing data and applications, but it can be filled up quite quickly. Most of these devices now include slots that will accept some sort of flash memory to increase their capacity (see these Tech Tips on flash memory: Part I (http://www.geeks.com/pix/techtips-020305.htm) and Part II (http://www.geeks.com/pix/techtips-021005.htm)). Typical formats supported by PDAs include Compact Flash (http://www.geeks.com/products.asp?cat=RAM#CompactFlashMemory), SD (http://www.geeks.com/products.asp?cat=RAM#SecureDigitalMemoryCard), MMC (http://www.geeks.com/products.asp?cat=RAM#MultiMediaCardMemory), and Memory Stick (http://www.geeks.com/products.asp?cat=RAM#MemoryStickFlashCards).
Flash memory is fairly cheap these days, but selecting a PDA based on a format that one already uses could be a good idea. An SD card, for example, could be shared between a
PC, digital camera, MP3 player, and a PDA to move various multimedia and application files between the devices. A PDA such as the HP iPaq rz1715 (http://www.geeks.com/details.asp?invtid=IPAQRZ1715-R&cat=PDA) offers an SD/MMC slot for expandability, while the Dell Axim X5 (http://www.geeks.com/details.asp?invtid=AXIM-400-K1&cat=PDA) offers the convenience of multiple flash memory slots, providing support for Compact Flash, SD, and MMC.
In addition to choosing the proper format of flash memory for your PDA, it may also be necessary to consider the capacity of the card chosen. Although you may be able to purchase a 2GB flash memory card, for example, that does not mean every PDA will be able to access it. Check the PDA manufacturer's specifications, especially on older models, to confirm that there is no maximum card size that it will accept. In addition to providing additional storage space, expansion slots can be used for other devices as well. GPS receivers are popular accessories for PDAs, and can be found with different interfaces to match the slots available on a PDA. Leadtek is one manufacturer offering both Compact Flash (http://www.leadtek.com/gps/gps_9534_1.html) and Secure Digital (http://www.leadtek.com/gps/gps_9534_1.html) based GPS receivers. Another innovation, from companies such as SanDisk, is the combination memory and WiFi card (http://www.sandisk.com/retail/sd.asp?nav=mobile). However, be sure to check compatibility with your current PDA, as some PDAs have problems using the extra features.
Size

The overall size may vary from model to model, but in general the length and width of a PDA will be in the range of a 3 x 5 index card. These values may vary slightly, and a difference of a few fractions of an inch may be important to a user with specific space constraints. For example, comparing the measurements of a Toshiba 830w (http://www.geeks.com/details.asp?invtid=PD830C-00002-CA&cat=PDA) (5.31 by 3.03 by 0.65 inches) to those of a Compaq Ipaq 3950 (http://www.geeks.com/details.asp?invtid=269808-001&cat=PDA) (4.54 by 3.00 by 0.61 inches) shows that the width and thickness are nearly identical. But the difference of roughly three-quarters of an inch in height may be a big deal when trying to fit the device into a briefcase pocket, or even a shirt pocket.
The weight may be directly related to size, and is perhaps another relevant area worth considering. The weight of a typical PDA may be just a few ounces (roughly 100 to 150 grams), so you obviously aren't going to strain yourself toting it. The difference between one model and another may be due to the variety and quantity of extra features included, and in general, an individual's requirements for functionality may influence the overall weight.
Another key aspect of size as it applies to PDAs is the screen size. Larger screens, with higher resolutions, allow the user to operate more comfortably. Using the same two PDAs referenced in this section, we see that the Toshiba 830w sports a 4.0-inch screen at 480x640 resolution, while the Compaq Ipaq 3950 has a 3.8-inch screen at 240x320 resolution. So, the overall size of the Toshiba is a bit larger, but it makes good use of the space by including a larger display capable of twice the linear resolution (four times the total pixels) of the Compaq.

Battery Life

Most PDAs are now provided with an integrated rechargeable battery which recharges while connected to a base station or power adaptor. The most common type of rechargeable battery is Lithium Ion (Li-Ion), and a general gauge of a battery's capacity is provided in terms of mAh (milliamp-hours). Although real-world performance will vary among devices and how they are used, a battery with a higher mAh value will be able to hold a greater charge and last longer between charges. How the device is used will obviously play a role in how long the battery lasts, and this is a difficult number to provide with any certainty. Manufacturers may provide a figure for normal life between charges, but this is most likely based on occasional use, where the device is idling for a majority of the time. This figure may indicate battery life of up to several days on one charge, but under more intensive operation, the battery life could be cut significantly. Watching a movie, listening to audio files, or playing games may drop the life on one charge from a few days down to a few hours. Use of wireless networking and display backlighting are two other things that can seriously impact battery life on any PDA, regardless of the type or quality of batteries included. Your best bet for information on battery life is to seek out independent reviews or owner comments on a PDA of interest.
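The relationship between a battery's mAh rating and runtime described above can be sketched with simple arithmetic. The current-draw figures below are hypothetical examples; real devices vary widely with screen brightness, wireless use, and workload:

```python
# Rough battery-life estimate from a battery's mAh (milliamp-hour) rating.
# Hours of use = capacity (mAh) / average current draw (mA).

def estimated_hours(capacity_mah: float, draw_ma: float) -> float:
    """Estimate runtime in hours for a given average current draw."""
    return capacity_mah / draw_ma

battery = 1000  # a hypothetical 1000 mAh Li-Ion pack

idle_hours = estimated_hours(battery, 20)    # light, mostly idle use
video_hours = estimated_hours(battery, 250)  # intensive video playback
```

With these assumed draws, the same pack lasts 50 hours idling but only 4 hours playing video, which mirrors the days-versus-hours gap the manufacturers' "normal use" figures tend to hide.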
Final Words

With all of the options available, PDAs are far more versatile than they ever used to be. Finding a device with the flexibility to be useful for both business and pleasure doesn't have to be a daunting task if a few key considerations are identified early in the process. A good balance of performance, size, and capabilities should be easily achieved, while still respecting a reasonable budget.
Computer Mice

Every computer user can hopefully identify their mouse and the important role it plays in the daily operation of their computer. Mice are nothing new, and for the most part nothing overly complex, but the average user may not be familiar with all of the options and technologies that go into these little devices. This Tech Tip will take a look at some of the features of mice that people may take for granted, or may otherwise be unaware of.

Tracking Technologies

Mechanical mice - Mechanical mice were the first ones used on computers, and can still be found for sale (http://www.geeks.com/details.asp?invtid=GN115&cat=MOU), despite the advances in tracking technologies. These mice feature a hard ball on the underside that rolls as the mouse is moved, and rollers inside the mouse translate the physical motion into movement of the pointer on the screen. Some ball mice are a bit more advanced and replace the internal rollers with optical sensors, but the same principle applies.
Mechanical mice require occasional maintenance to keep the ball and rollers free of lint and other debris, and with numerous moving parts there is always a potential for problems. The use of a mouse pad is recommended for these mice, as it not only provides a clean surface to work on, but also provides the resistance needed for the ball to roll smoothly. The precision of mechanical mice is not particularly good, and although they may be fine for typical desktop work, they were never quite up to the task of detailed graphics work or serious game playing. Optical mice - Optical mice replace the whole ball/roller assembly of mechanical mice with a beam of light and an optical sensor. The beam of light shines down on the surface below the mouse, and the sensor uses the reflected light to track movement. The images included with the listing for this optical mouse (http://www.geeks.com/details.asp?invtid=HTM-39GWT&cat=MOU) provide a glimpse at the bottom of the mouse, where the light and sensor can be seen.
Optical mice have several advantages over mechanical mice. There are no moving parts to break or otherwise cause problems. The need for maintenance is greatly reduced, as the bottoms have no openings or rollers to collect lint. Although mice generally aren't heavy, the elimination of the ball and roller mechanism allows an optical mouse to be much lighter than a comparable mechanical mouse. The precision of optical mice is also much greater than that of mechanical mice, with resolutions ranging from the low hundreds to the high hundreds (as measured in dpi, dots per inch). Another advantage is that the need for a mouse pad may be eliminated with an optical mouse, as they do best tracking on any smooth, flat surface. A clean desktop is generally good enough, but those looking to take the precision of optical mice to the highest level may opt for a performance mousing surface. There are several precision mousing surface manufacturers, such as XRay Pads (http://www.xraypad.com/) and FUNC Industries (http://www.func.net/), that design pads to appeal to game players and others who demand the best performance.
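The dpi figure above translates directly into how finely a mouse reports movement. A simplified sketch (real drivers also apply acceleration and pointer-speed scaling on top of this):

```python
# How a mouse's resolution (dpi, dots per inch) converts physical travel
# into reported movement counts. This is a simplification: operating
# systems layer acceleration and sensitivity settings on top.

def counts_reported(inches_moved: float, dpi: int) -> int:
    """Number of movement counts a sensor reports for a given travel."""
    return round(inches_moved * dpi)

# Moving a 400 dpi mouse one inch reports 400 counts; an 800 dpi sensor
# reports 800 counts for the same motion, allowing finer pointer control.
low_res = counts_reported(1.0, 400)
high_res = counts_reported(1.0, 800)
```

This is why higher-dpi optical mice feel more precise: the same hand motion is sampled into more distinct positions.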
Laser mice - Laser technology is the latest and greatest in computer mouse tracking, and takes the advantages of optical to a new level. Most of the attributes of a laser mouse have been described in the optical mouse section, except for one: instead of a fairly wide beam of light, it uses an extremely narrow beam of laser light. The Logitech MX1000 laser mouse (http://www.geeks.com/details.asp?invtid=931175-0403-DT&cat=MOU) may be the mouse for you if you are looking for extreme precision. According to Logitech, the laser technology used in the MX1000 provides up to 20x more sensitivity to surface detail, or tracking power, than optical.
Hard Wired Connection Technologies

Serial - Serial mice are fairly difficult to come by these days, as are the ports they need in order to operate. This technology is quite old and slow, and the popularity and convenience of USB has all but eliminated the need for this interface on your typical PC. But there were mice that sported the 9-pin connection needed to connect to a serial port, and many PS2 mice used to include an adaptor for serial ports. PS2 - PS2 mice were the standard for a long time, as all motherboards provided two PS2 ports for connecting a keyboard and a mouse. USB technology has become so widely used that the slower and less convenient PS2 ports are joining the serial port on the verge of extinction. That said, not many mice are still sold that only support a PS2 connection, but some are still available, such as this unit from Genica (http://www.geeks.com/details.asp?invtid=GN-115&cat=MOU).
USB - Most mice can now connect via USB, and many include an adaptor to be used on a PS2 port as well. It seems that just about any mouse now uses USB to connect, whether it is a wired mouse or any of the varieties of wireless mice that we are about to look at.
Wireless Connection Technologies

Radio Frequency - The most common type of wireless mouse uses radio frequencies to communicate motion to a receiver that is connected to the PC. This generic wireless mouse (http://www.geeks.com/details.asp?invtid=RFM308-RC-USB&cat=MOU) operates on the 27MHz frequency, and the mouse itself is powered by AAA batteries, which are not included. As you move up the price scale of RF (Radio Frequency) wireless mice, the packages will generally include an integrated rechargeable battery, as does the Logitech MX1000 discussed previously. Other features of higher end RF wireless mice include extended range, greater precision, and a receiver that doubles as a battery charger.
Bluetooth - The Tech Tip on Bluetooth (http://geeks.com/pix/techtips011905.htm) discussed the basics of this wireless technology, and how it is a good fit for lower speed devices, like a mouse. Logitech (http://www.geeks.com/details.asp?invtid=930970-0403-DT&cat=MOU) and IOGear (http://www.geeks.com/details.asp?invtid=GME225B-DT&cat=MOU) are two manufacturers offering products for Bluetooth users, in addition to more traditional mice. Bluetooth mice are also battery powered, and use the 2.4GHz radio band to communicate with an included receiver/charger or other Bluetooth adaptor.
RFID - A truly unique approach to wireless mice has been developed by a company called A4Tech. The A4Tech ND-30 RFID wireless mouse (http://www.geeks.com/details.asp?invtid=NB30-N&cat=MOU) must be used with the included mouse pad in order to function, but there are no batteries in the mouse, and no cords to get in the way. It works by using electromagnetic induction between the pad (which is plugged into a USB port) and the wireless, non-powered mouse. You never have to worry about replacing or charging batteries, and the weight of the mouse is greatly reduced since no power components are necessary.
Features

Buttons - Most mice (except for the Macintosh's) include at least two buttons. The use of these is fairly well understood, but other buttons may be featured on a mouse to further simplify common tasks. The Logitech MX 510 mouse (http://www.geeks.com/details.asp?invtid=931162-0403-DT&cat=MOU) features a total of 8 buttons which can be programmed to execute functions of the user's choosing.
Scroll wheels - Many mice now include a scroll wheel between the two main buttons which allows up/down scrolling of documents and web pages. The scroll wheel may also serve as a third button on some mice, and clicking the scroll wheel will activate commands in many applications. More advanced scroll wheels on some mice allow for left-to-right scrolling, which may be useful on a wide spreadsheet or large image. Extra features - Many mice provide more than the basic functionality we have come to expect. Some provide a reduced footprint in order to make them more portable for use with a notebook computer (http://www.geeks.com/details.asp?invtid=931150-0403-DT&cat=MOU). Some are designed for multi-tasking and provide an integrated flash memory card reader (http://www.geeks.com/details.asp?invtid=KMOUSE-SDMSMMCN&cat=MOU). Then there are others that just look cool (http://www.geeks.com/details.asp?invtid=BLK-3DOPT-N&cat=MOU) with a bit of a light show, or that actually keep you cool (http://www.logisyscomputer.com/viewsku.asp?SKUID=MS802SL&DID=KEYBOARD) by including a small fan in the palm rest area.
Final Words

On the surface, computer mice are fairly simple devices that may not receive the attention they deserve from end users. Selecting a high quality, ergonomically designed mouse can do wonders for productivity and comfort, and the options available should allow anyone to find the right mouse for their personal preferences.
Digital Camera Basics - The Vocabulary

Shopping for a digital camera can be a difficult task considering the sheer number of choices out there. The number of manufacturers, models, and price ranges that need to be sorted out makes the process difficult enough, but throw in all the buzz-words that need to be understood, and even a short list of cameras can become difficult to analyze. This Tech Tip will take a look at a few key words that may come up when researching a digital camera, and will hopefully reduce the headaches associated with the process.

Pixels

Digital images are composed of thousands or millions of tiny squares called picture elements, or pixels for short. Each square has its own color assigned to it, and the compilation of all of these little colored squares allows images to appear smooth when viewed at original size. If an image is magnified several times, the appearance of the pixels becomes more obvious, and at high magnifications each colored pixel can be distinguished individually.

Megapixels

Basically, the term megapixel means one million pixels, and it is used to describe the maximum number of pixels found in an image produced by a digital camera. It is generally the criterion used to classify cameras, and checking the Geeks.com selection (http://www.geeks.com/products.asp?cat=CAM) shows that their cameras are all sorted into ranges of megapixels (MP). Because cameras are marketed so heavily by their megapixel specification, many people assume that this is the single most important criterion when choosing a camera. More megapixels do not necessarily mean better images; they mean larger images (both in physical size and in file size). The megapixel count is achieved by multiplying the number of pixels in one horizontal line by the number of pixels in one vertical line. So, if a camera can produce images at a maximum resolution of 1600 by 1200 pixels, it is a 1.92 megapixel (1,920,000 pixel) camera.
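The megapixel arithmetic above is simple to sketch, along with a common rule of thumb for maximum print size at 300 dpi (the 300 dpi figure is a typical print-quality assumption, not something from this article):

```python
# Megapixel count and a rough maximum print size for a given resolution.

def megapixels(width: int, height: int) -> float:
    """Megapixels = (horizontal pixels * vertical pixels) / 1,000,000."""
    return width * height / 1_000_000

def max_print_size(width: int, height: int, dpi: int = 300) -> tuple:
    """Largest print, in inches, at the given print resolution (dpi)."""
    return (width / dpi, height / dpi)

mp = megapixels(1600, 1200)             # 1.92, matching the example above
print_inches = max_print_size(1600, 1200)  # roughly 5.3 x 4 inches at 300 dpi
```

The print-size helper illustrates why pixel count matters mainly for printing: on screen, a 1.92 MP image already exceeds many monitor resolutions.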
It is not fair to assume that the images from the 5.0 MP Kodak CX7525 (http://www.geeks.com/details.asp?invtid=CX7525-R&cat=CAM) are automatically better than those from the 4.0 MP Kodak CX7430 (http://www.geeks.com/details.asp?invtid=CX7430-R&cat=CAM) strictly based on their megapixel counts. All it means is that the maximum image size of the CX7525 is 2560 x 1920, versus 2408 x 1758 for the CX7430. Many other features in the cameras can impact the quality of the images they produce, and may be far more important for the typical user to consider than the maximum overall size of each image. Larger image size may do nothing for a user who only wants to view images on a computer screen, or for use on the web, but higher megapixel images are important for those looking to make prints. Generally, higher pixel counts in an image translate to the ability to create larger prints.

Sensors - CMOS and CCD

Digital cameras use a small sensor to capture the image before transferring it to flash memory for storage. Equivalent to the negative in a film camera, these sensors come in a variety of sizes, with most being between 20 and 40 square millimeters. There are two types of sensors that may be found in cameras: CCD (Charge-Coupled Device) and CMOS (Complementary Metal Oxide Semiconductor). CMOS sensors are usually found in cheaper cameras and offer lower image quality than the CCD sensors that would probably be found in more expensive cameras. There is an exception to the rule that CCD is better than CMOS, and that is with Digital SLR-type (Single Lens Reflex) cameras. They use a much larger sensor (greater than 300 square millimeters) and can provide excellent image quality, but that quality does come with a much higher price tag.

Zoom - Optical and Digital

Most digital cameras offer some sort of zoom, but it is important to identify which type is being provided. Optical zoom functions just as on a film camera, where the lens physically moves to produce the magnification. Digital zoom uses circuitry to enlarge a portion of the standard sized image and crops the content outside of the zoomed area.
The quality of images produced using digital zoom suffers due to the nature of the process, and optical zoom is a far more desirable feature. The price of a camera with optical zoom may be a good deal more than one with digital zoom only, but the quality of the images cannot be compared. The Kodak CX7330 (http://www.geeks.com/details.asp?invtid=CX7330-R&cat=CAM) and the Kodak CX7300 (http://www.geeks.com/details.asp?invtid=CX7300-R&cat=CAM) are comparable cameras in many regards, but the CX7300 features only digital zoom, while the CX7330 features both optical and digital zoom for about $30 more.

Memory - Digital Film

Previous Tech Tips took a look at the variety of flash memory available (Part One - http://www.geeks.com/pix/techtips-020305.htm and Part Two - http://www.geeks.com/pix/techtips-021005.htm), and these items become the film in a digital camera. The two important things to consider when choosing flash memory for a digital camera are that the right format is chosen, and that a quality module is chosen that can record quickly and be ready for the next picture as soon as possible. A photographer looking to snap a rapid series of high resolution images on a Kodak DX7440 (http://www.geeks.com/details.asp?invtid=DX7440-R&cat=CAM) has many options in SD (Secure Digital) memory to choose from, but would be far better off with something like the SanDisk Ultra II SD card (http://www.geeks.com/details.asp?invtid=SDSDH-512-901&cat=RAM), capable of a sustained write speed of 9 MB/s, than with a generic SD card (http://www.geeks.com/details.asp?invtid=SDMC256&cat=RAM) capable of a burst speed rated at only 2.5 MB/s.

Aliasing

Even if you think you know the basic definition of this term, it may seem confusing in the context of digital cameras. Aliasing refers to the appearance of jagged edges, generally seen on diagonal or curved surfaces in images. This effect is caused by the fact that all pixels are square, and that even non-square shapes in an image need to be created from square components. The solution to aliasing is, not surprisingly, called anti-aliasing. Through software, the edges generally affected by aliasing are blended, and the jagged appearance can be made much smoother. Video game enthusiasts may be familiar with anti-aliasing and the impact its varying levels can have on the overall performance of game play. Although the technology is similar, the personal importance of having the best digital images possible makes applying anti-aliasing just about a no-brainer.

Aspect Ratio

Aspect ratio describes the shape of a digital image, or any image for that matter, where the first number represents the width of the image and the second number represents the height.
People may be familiar with the term as applied to televisions (as 16:9 wide screen televisions are now all the rage to replace traditional 4:3 units), and the concept is the same here. Standard film cameras generally use an aspect ratio of 3:2, but most digital cameras have adopted a 4:3 aspect ratio so that images better fit on a standard computer monitor. Monitors with typical resolutions of 800x600, 1024x768, or 1280x960, for example, all have an aspect ratio of 4:3, so it only makes sense to produce images that will fit well on these screens. Although images can be manipulated to print on any size paper, special photo-quality paper (http://www.geeks.com/details.asp?invtid=C6944A) is available to allow for high-quality prints to be made at the correct aspect ratio.
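An image's aspect ratio can be derived from its pixel dimensions by reducing the width:height pair with their greatest common divisor, as a quick sketch of the idea above:

```python
# Derive an image's aspect ratio from its pixel dimensions by reducing
# width:height with the greatest common divisor.
from math import gcd

def aspect_ratio(width: int, height: int) -> str:
    """Return the reduced width:height ratio as a string, e.g. '4:3'."""
    d = gcd(width, height)
    return f"{width // d}:{height // d}"

# The monitor resolutions mentioned above all reduce to the same shape:
# aspect_ratio(800, 600), aspect_ratio(1024, 768), and
# aspect_ratio(1280, 960) each return "4:3".
```

The same function returns "16:9" for a wide-screen resolution like 1920x1080, which is why images from a 4:3 camera need cropping or letterboxing on such a display.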
Connectivity

The pictures taken by a digital camera have to be extracted by some means in order to get them onto a computer, or perhaps directly onto a printer. In general, cameras provide a cable to connect to a computer via serial, USB, USB 2.0, or Firewire. Serial ports are just about extinct at this point due to their slow connection speeds and lack of plug-and-play support, but some older or lower-end models may still offer serial connectivity. USB may be the most common form of connectivity, and if speed is important to you, look specifically for USB 2.0 support, as it is up to 40 times faster than the original USB standard. Some specialized cameras may take advantage of the high speed Firewire protocol for connecting to a computer. Just about all modern computers come with at least a pair of USB ports, but not all computers include Firewire. Firewire capabilities can be added to any computer with an available PCI slot by using an expansion card such as this one (http://www.geeks.com/details.asp?invtid=UW-1394PCI-A01-N&cat=CCD). Some cameras don't require any cables at all, as they can transmit images to a PC wirelessly. The Concord EyeQ (http://www.geeks.com/details.asp?invtid=EYEQ&cat=CAM) is one such camera, utilizing Bluetooth technology in lieu of wires. Even with all the modes of transmitting images listed so far, one other method may serve as a universal replacement for all of them. Many people find flash memory card readers (http://www.geeks.com/products.asp?Cat=CAM#CameraMemoryCardReaders) to be a quick and convenient alternative to using the camera's native means of connecting. You simply remove the memory card from the slot on the camera and pop it into the appropriate slot on the reader, and the computer system can then access the card like a local disk drive.

Final Words

Every high-tech field has its own set of specific buzz words, and digital photography is no different.
Although by no means an exhaustive resource on digital photography terminology, this Tech Tip provides insight into a few key terms worth knowing in order to make shopping for a digital camera just a bit easier.
Digital Camera Accessories

Digital cameras are great devices that have made photography simple and enjoyable for countless people. They make taking and sharing photos easier and more economical than film ever could, but a digital camera may not be optimized straight from the retail packaging. It needs to be accessorized, and this Tech Tip will take a look at five accessories worth considering for use with any digital camera.

Flash Memory

Most cameras are sold with either a removable flash memory card included, or an onboard memory chip, for storing the images the camera takes. The problem is that the included memory may not be the best for many practical uses, and may need to be upgraded by the end user immediately. The memory provided with many cameras is generally of too low a capacity, and the performance of the modules may not be the best either. A camera taking images at the high resolutions possible today is going to chew up space on a flash card quickly, and the typical 16MB or 32MB card provided by the manufacturer just isn't going to cut it. I recently purchased a high-quality 3.2 Megapixel (MP) camera that came with a 16MB Compact Flash (http://www.geeks.com/products.asp?cat=RAM#CompactFlashMemory) card. I was somewhat surprised to see that I would only be able to take 9 pictures at maximum resolution before the card was full, and wondered why the manufacturer bothered to include a memory card at all. The first step to making the camera more usable was to upgrade to a 512MB Compact Flash card, which upped the total storage to 299 images at the highest resolution. Taking a look around shows that this practice is common, and that even cameras at 5MP and above may include just a 32MB card. When purchasing a camera, this is something to pay close attention to, and if the camera seems to be a good fit otherwise, be prepared to buy a larger memory card at the same time.
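The capacity math behind those image counts is simple division. In this sketch, the ~1.7MB per-image figure is inferred from the counts above (16MB holding 9 shots), not a published specification:

```python
def images_per_card(card_mb: float, image_mb: float) -> int:
    """How many images of a given size fit on a card of a given capacity."""
    return int(card_mb // image_mb)

IMAGE_MB = 1.7  # rough size of a maximum-resolution 3.2 MP shot (inferred)
print(images_per_card(16, IMAGE_MB))   # prints 9 -- the bundled card
print(images_per_card(512, IMAGE_MB))  # prints 301 -- close to the 299 observed
```

The small gap between the computed 301 and the observed 299 comes from per-image size varying with scene detail and compression.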
The other issue with flash memory is that not all cards are able to read and write at the same speed. While a slower memory card may be more than adequate for playing back a collection of digital audio files, when it comes to taking a steady stream of high resolution images, you need media that can keep up. SanDisk's Ultra II series of SD cards (http://www.geeks.com/details.asp?invtid=SDSDH-512-901&cat=RAM) provides much higher read/write speeds than typical SD cards (http://www.geeks.com/products.asp?cat=RAM#SmartMediaMemory), which are more than likely the type you will find included with a camera. Having a high-speed memory card may be the difference between capturing a string of high resolution action shots, and sitting in frustration as the light blinks on your camera to indicate that it is still writing the previous image. These details and many others were covered in two previous Tech Tips dedicated to flash memory. If you missed them, please make sure to check out both Part 1 (http://www.geeks.com/pix/techtips-020305.htm) and Part 2 (http://www.geeks.com/pix/techtips021005.htm) of that series.

Card Readers
A card reader (http://www.geeks.com/products.asp?cat=CAM#CameraMemoryCardReaders) may be just the accessory to consider for use with the new, high capacity flash memory card you picked up with your digital camera. These devices can be installed either internally (http://www.geeks.com/details.asp?invtid=5069-6126&cat=CAM) or externally (http://www.geeks.com/details.asp?invtid=HE-660A&cat=CAM) to your computer, and provide a convenient way to get the files off all the common flash memory formats used in cameras, digital audio players, and PDAs. Internal ones are great for systems that you know will always need these features, but external readers are just as handy, and can be taken with you for use on any desktop or notebook computer with an open USB port. Not only do these devices provide a convenient interface for accessing the various types of flash memory you may use in a camera, they may be able to do so more quickly than the standard cable provided with a camera. Most card readers take advantage of the high-speed transfers afforded by USB 2.0, and even if your camera supports USB 2.0, the camera's user interface may slow you down. The card reader will treat any flash memory inserted into it as a disk drive and allow you to browse the contents and copy what you want onto your hard drive quickly. Some cameras use a proprietary software interface that may slow things down or otherwise be inconvenient, but if you can use Windows Explorer (or the equivalent in other operating systems) you can get your photos from a card reader with ease.

Rechargeable Batteries

Many cameras come with rechargeable batteries included, which can make life much easier (and less expensive). Cameras can drain batteries quickly thanks to the larger, color LCD screens that they now need to power, as well as the flash and the other usual functions.
If a camera does come with rechargeable batteries, it may be worth investigating whether it uses a proprietary format or one of the more common battery types (like AA or AAA). Proprietary batteries may have their advantages, but in general a camera that accepts a commonly available battery may be the most convenient, for two reasons. First, if the rechargeable batteries should happen to run out of charge and you have neither the time nor facilities to recharge them, you can easily pop in common disposable batteries just to keep the camera up and running. This may prove particularly helpful while traveling, when there is either no outlet for recharging, or you only have access to a foreign power source that requires an adaptor you don't happen to have. Second, a standard battery type pays off when the batteries eventually die altogether. Rechargeable batteries can only be charged so many times, and as they age their performance may begin to fade before they fail completely. Replacing batteries of a standard format is easy and relatively inexpensive, while sourcing a replacement proprietary battery may be much more difficult and much more expensive. Rechargeable batteries (http://www.geeks.com/products.asp?Cat=CON#BatteriesandBatteryChargers) are generally sold based on their capacity, measured in units of mAh (milliamp-hours). Although real-world performance will vary with the camera and how it is used, a battery with a higher mAh value will generally hold a greater charge and last longer between charges than a lower-rated battery. The previously mentioned 3.2 MP camera I own came with four rechargeable AA batteries rated at 1600 mAh. Although they last for a respectable amount of time between charges, replacing them with something like this four pack of 2300 mAh AA batteries (http://www.geeks.com/details.asp?invtid=NIMH-2300MAH&cat=CON) may provide a noticeable boost.
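As a rough back-of-the-envelope model, runtime scales with mAh capacity at a given average current draw. The 500 mA draw below is a made-up illustrative figure, not a measured camera specification:

```python
def runtime_hours(capacity_mah: int, draw_ma: int) -> float:
    """Idealized runtime: capacity divided by average current draw."""
    return capacity_mah / draw_ma

old = runtime_hours(1600, 500)  # the bundled 1600 mAh cells: 3.2 hours
new = runtime_hours(2300, 500)  # the 2300 mAh upgrade: 4.6 hours
print(f"the upgrade buys about {new / old:.2f}x the runtime")  # ~1.44x
```

Real cells fall short of this ideal (draw varies with flash and LCD use, and capacity fades with age), but the ratio explains why the 2300 mAh pack is worth considering.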
Carrying Case

Protecting your camera should be of some importance considering its cost and somewhat delicate nature. Although most cameras are sold with some sort of case, many aren't much more than a form-fitting piece of vinyl, perhaps with a bit of felt backing if you're lucky. There are numerous camera cases out there that can provide more substantial protection for a digital camera, and configurations are available that allow many of the accessories to be carried in the same case. A case doesn't need to be expensive to be effective, and even a basic unit, like this one from Lowepro (http://www.geeks.com/details.asp?invtid=Z5-PUR&cat=CAM), can provide additional padding, storage space for extra batteries and memory cards, and perhaps a strap or belt loop for carrying it. The important thing is to find a well-constructed case that fits your particular camera well and offers ample padding to protect it from all the bumps and thumps it may incur while in transit. And if nothing on the market seems to fit your needs, you may get inspiration from this guide (http://www.rojakpot.com/default.aspx?location=3&var1=213&var2=0) to build your own camera case.

Tripod

Not all tripods are the huge contraptions you would expect professional photographers to carry (http://www.geeks.com/details.asp?invtid=VPD-TRIPOD&cat=CAM); there are miniature tripods (http://www.geeks.com/details.asp?invtid=BLK-TR-1&cat=CAM) that may be just as useful and easier to carry as well. The typical digital camera user may not think they have use for a tripod, but one can come in quite handy for a couple of reasons. First, depending on shutter speed, an image may become blurred by even the slightest movement. If you don't have a steady hand, a tripod may be just the thing to ensure that your important shots are crystal clear.
Close-up shots in particular may benefit from the steady support provided by a tripod, and anyone who operates their camera in macro (extremely close up) mode on a regular basis may be pleased with the outcome once a tripod has been added to their cache of accessories. Second, tripods can also prove quite handy for travelers. If there is no one around to take a picture of you posing with a historic landmark or lounging on the beach, a tripod and a camera with a timer are all you need. The UltraPod (http://www.pedcopods.com/) is one of the miniature varieties of tripod that folds up to make it convenient for travel, offers the typical features of a tripod, and, as the image on their homepage indicates, has one other interesting feature: an integrated Velcro strap allows the tripod and camera to be strapped to any narrow-diameter object for even more self-supported photo opportunities. As a long-time owner of an UltraPod I, I have strapped it to numerous street signs, fence posts, saplings, and the like, to get pictures I otherwise wouldn't have been able to take.

Final Words

Digital cameras are great devices that make taking and sharing photographs faster and, let's face it, cheaper than ever before. Although they may be great on their own, a few key accessories can make them even more convenient, practical, and enjoyable to use. There are other digital camera accessories (http://www.geeks.com/products.asp?cat=CAM#CameraAccessories) that may be right for your specific needs, but the five discussed in this Tech Tip are universally worth considering by users from novice to expert.
Socket 478
For all of these form factors, the number following "Socket" refers to the number of CPU pins or connectors. Socket 478 is a format specifically designed for Intel's Celeron (http://www.geeks.com/products.asp?cat=CPU#Socket478CPUs(Celeron)), Pentium 4 (http://www.geeks.com/products.asp?cat=CPU#Socket478CPUs(Pentium4)), and mobile Celeron/Pentium (http://www.geeks.com/details.asp?invtid=MP42000478N&cat=CPU) processors. This socket supports processors with frontside buses of 400 MHz (100 MHz x 4), 533 MHz (133 MHz x 4), and 800 MHz (200 MHz x 4), and provides top speeds from below 1 GHz to higher than 3 GHz. The Intel naming system used for the Pentium 4 processors in this class uses letters to represent the frontside bus speed present: an A means 400 MHz, B means 533 MHz, and C means 800 MHz. So, a Pentium 4 2.4C would offer greater performance than a 2.4B or a 2.4A, despite all three having the same 2.4 GHz clock speed. The Pentium class of processors, like many others from AMD and Intel, also applies names to the cores present on the processor. The core is the processing portion of the CPU, generally found at the center of the silicon die. As the architecture within cores of the same type of processor changes, new names are given to signify the different levels of performance that users may
experience. The Northwood, Gallatin, and Prescott cores can be found on Pentium 4 processors of the same speed, and a main difference between the three is that the first two are fabricated on a 0.13 micron process and the Prescott on a 0.09 micron process (a micron is one millionth of a meter). A lower number implies a more tightly-packed core that generally requires less power to perform the same operations. Later generation Socket 478 Pentium 4 processors were the first to implement Intel's Hyper-Threading Technology (http://www.intel.com/technology/hyperthread/), which allowed applications to run in parallel, thus improving the performance of the system. Although not nearly as strong, the concept is similar to having a dual processor system or dual-core processor, as systems with Hyper-Threading enabled can handle intensive applications much more easily than the same system without it. Socket 478 was Intel's flagship format for several years, and continues to be a popular choice among Pentium 4 motherboards. Socket 478 motherboards were first compatible with SDRAM memory (http://www.geeks.com/products.asp?cat=RAM#168-pinDIMMMemory), then RAMBUS memory (http://www.geeks.com/products.asp?cat=RAM#184-pinRIMMMemory), and as it became more popular, DDR memory (http://www.geeks.com/products.asp?cat=RAM#184-pinDDRDIMMMemory) became the format of choice. But Intel's decision to embrace DDR2 (among other developments) has resulted in a totally new CPU socket for Pentium 4 processors: Socket T.
As Intel transitioned to the LGA (Land Grid Array) 775 format, they also transitioned to a three digit naming convention, similar to the PR (Performance Rating) grades that AMD had used for years. Instead of simply marketing a 3.0 GHz Pentium 4 Prescott core as such, they now refer to it as the Pentium 4 530. A 2.8 GHz Prescott is now called a 520, and a 3.8 GHz Prescott is called a 570.
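Since the model number no longer encodes the frequency, translating one to the other takes a lookup. A hypothetical table in Python, using only the Prescott figures cited above:

```python
# Hypothetical lookup: Intel's LGA 775 three-digit model numbers mapped to
# the actual Prescott clock speeds quoted above. The table and function
# names are illustrative, not an official Intel resource.
P4_MODEL_CLOCK_GHZ = {
    520: 2.8,
    530: 3.0,
    570: 3.8,
}

def clock_for_model(model: int) -> float:
    """Return the clock speed (GHz) for a known Pentium 4 model number."""
    return P4_MODEL_CLOCK_GHZ[model]

print(clock_for_model(530))  # prints 3.0
```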
Socket 603/604
These CPUs may not be as common as the others, but are worthy of inclusion on this list anyway. Sockets 603 and 604 are home to Intel's modern Xeon processors, which are more likely to be found in a high-end server than in a desktop computer. The Xeon can come with frontside buses ranging from 400 MHz (100 MHz x 4) to 800 MHz (200 MHz x 4), and provides top speeds that now reach 3.6 GHz. Socket 603 is for older, slower Xeons capable of a 400 MHz frontside bus, and Socket 604 adds one pin simply to identify the faster bus speeds. Socket 603 processors will fit and function in a Socket 604 motherboard, but the opposite will not work. Xeons are powerful processors all on their own, but the architecture of the CPU and supporting motherboards allows serious systems to be configured with dual processors (http://www.geeks.com/details.asp?invtid=X6DVL-EG-O-DT&cat=MBB), or perhaps even four processors (http://www.supermicro.com/products/motherboard/Xeon/GC-HE/P4QH6.cfm), to really make light work of even the most intensive applications.
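Every frontside bus figure quoted in this series follows the same formula shown in the parentheses: base clock times transfers per clock. A quick sketch of that arithmetic:

```python
# Effective frontside bus speed = base clock x transfers per clock.
# Intel's Pentium 4/Xeon bus is quad-pumped (4 transfers per clock);
# AMD's Athlon-era bus, covered in Part 2, is double-pumped (2).
def effective_fsb(base_mhz: int, transfers_per_clock: int) -> int:
    """Compute the marketed FSB speed in MHz."""
    return base_mhz * transfers_per_clock

print(effective_fsb(100, 4))  # prints 400 -- the slowest Xeon bus
print(effective_fsb(200, 4))  # prints 800 -- the fastest quoted here
print(effective_fsb(200, 2))  # prints 400 -- AMD's double-pumped equivalent
```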
Final Words
From the data presented so far, there are more than enough variables to ponder from just Intel's stable of processors. We are only halfway there, as the next part in this series of Tech Tips will turn its attention to the modern processors available from AMD.
CPU Socket Basics, Part 2 AMD: This two-part series of Tech Tips started with a look at a few details of each of Intel's modern processors (see CPU Socket Basics, Part 1: http://www.geeks.com/pix/techtips19MAY05.htm), and how they compare with one another. In the second and final part of this series, we will take a look at the CPU sockets supporting modern AMD processors.

Socket A
Socket A was the staple format in AMD's line-up for years, carrying the brand through the Athlon, Thunderbird, Duron, Athlon XP, Athlon MP, and Sempron classes of processors. The number of pins found on the bottom of a Socket A CPU totals 462, making this one of the few formats not named for the number of pins on the processor. Socket A processors came with frontside bus speeds from 200 MHz (100 MHz x 2) up to 400 MHz (200 MHz x 2). Socket A processors ranged in top speeds from under 1 GHz to just over 2 GHz, but identifying a Socket A processor's true speed is difficult due to AMD's use of PR (Performance Rating) grades instead of true processor speed as a sales tool. For example, an AMD Athlon XP 3000+ (http://www.geeks.com/details.asp?invtid=AXDA3000DKV4D&cat=CPU) doesn't have a speed of 3.0 GHz, but rather 2.167 GHz. AMD contends that its architecture provides performance equivalent to the PR grade, despite the actual speed being much lower. Early Socket A systems supported SDRAM memory (http://www.geeks.com/products.asp?cat=RAM#168-pinDIMMMemory), but as the technology advanced and DDR memory (http://www.geeks.com/products.asp?cat=RAM#184-pinDDRDIMMMemory) became available, these processors took advantage of the increased performance. DDR memory offered greater overall performance at the same clock speeds as SDRAM, as well as the
potential for much higher clock speeds, so it was only a matter of time before it became the standard memory for use with these processors. Just as with Intel, AMD cores in this series also have names to distinguish different levels of performance within the same class of processor. For example, Duron cores carry names such as Morgan and Applebred, while Athlon XP processor cores have names such as Palomino, Thoroughbred, and Barton (listed in order from weakest to strongest). The long-lived success of Socket A is winding down, as the last date for production orders to be placed has just passed (http://theinquirer.net/?article=21377). As we say goodbye to Socket A, AMD intends to focus more attention on similar processors for Socket 754, as well as the higher-end Socket 939/940 formats.

Socket 754

Socket 754 picks up where Socket A left off, offering support for CPUs including the Sempron and Athlon 64. As the name implies, the processor features 754 tiny pins that interface with the socket on the motherboard. As processors develop, the number of pins manufacturers can fit on the back becomes more and more impressive. The Sempron class of processor is more of a budget-conscious offering, with current PR grades available from 2600+ to 3300+, while the Athlon 64 is the more powerful 64-bit offering, with PR grades from 2800+ and climbing past 3700+ (http://www.geeks.com/details.asp?invtid=ADA3700BOXDT&cat=CPU). All Socket 754 processors support a frontside bus of 400 MHz (200 MHz x 2), and presently all systems utilize DDR memory.

Socket 939/940

Sockets 939 and 940 support the current top-of-the-line offerings from AMD and, as the names imply, they are separated by one pin. Like their Socket 754 cousins, this format of AMD processors utilizes DDR memory and provides a frontside bus of 400 MHz (200 MHz x 2).
Socket 939 supports Athlon 64 processors with PR grades currently up to 4000+, as well as the Athlon 64 FX series which takes the performance to new levels for game play and other intensive applications. Socket 940 is more the business side of this pair of sockets, offering support for both Athlon 64 FX series CPUs and the
AMD Opteron class of server processors. The Opteron series of processors includes a dual-core design, in which one physical processor is effectively seen as two, and given the possibilities of multi-processor motherboards, the computing power can be incredible. A recent release from AMD is the Athlon 64 X2, which brings high-performance dual-core technology to a Socket 939 desktop environment, but has yet to become available to the public for purchase. Where many AMD processors used a four-digit PR grade for marketing, the FX series of Athlon 64 processors uses a two-digit code (i.e., FX-51, FX-55) and the Opteron uses a three-digit code (i.e., Opteron 244, Opteron 252). All of these designations can certainly make things confusing, and given the varied nomenclature that AMD alone currently uses, it can be tricky figuring out how fast a CPU really is.

Final Words

As CPU technology advances, the sockets that correspond to the processors are forced to change as well. New physical sizes, internal architectures, and thermal properties require that the packaging evolve. A negative side effect of this development is that motherboards wind up with limited upgrade paths when it comes to adding a faster processor. Although this can occasionally be remedied with socket adaptors (http://www.powerleap.com/PowerLeapAdapters.html), as AMD and Intel continue down their parallel paths of development, this situation can be expected to continue. With that said, it looks like AMD's next CPU socket isn't that far off (http://www.xbitlabs.com/news/cpu/display/20050430084624.html).
Gaming Gear Checklist: Most Tech Tips have focused on the business side of computer hardware, but all work and no play... In this Tech Tip, we will look at some important considerations when selecting hardware for a computer that may need to work hard, but will need to play even harder. Today's fast-paced video games demand a lot of computing power, and even good systems a few years old just won't cut it. Buying the video game is the easy part, but making sure you have a system that can handle it is where things may get more complicated. Far Cry (http://www.farcry.ubi.com/) is a popular action game from UbiSoft (http://www.ubi.com/) that is a perfect example of the demands placed on computers. Their System Requirements page (http://www.farcry.ubi.com/system.php) lists the minimum specifications needed to play the game, as well as recommended specifications that will allow the game to run smoothly and look half decent. From the information provided on that page, it is clear that a computer from a few years ago may be able to play the game, but to really enjoy it, you may need to buy more than just the game software.

System Components

The core system components obviously play a major role in game play. As is the case with computer performance in general, faster and bigger are what you want in processors, memory, and hard drives to enhance the gaming experience. According to the Far Cry specifications, a processor with a speed greater than 2 GHz and 512MB or more of memory are recommended. These system specifications may not be cutting edge, but they may be greater than those of many personal computers. Far Cry is just one example of many modern games requiring similar resources, and the average system just might not be up to the task. Hardcore gamers (with the appropriate budget) might not flinch at dropping a few hundred dollars on new components hoping to squeeze just a bit more performance out of their system.
The technology advances so quickly that an endless cycle of upgrades is possible if you feel the need to keep up. For the sake of this article, we will assume that some of the core components in your system are there for at least the foreseeable future, and that they are at least modern enough to consider for use with video games.
Video
Video is no doubt the most important aspect of enjoying a video game. There may be many components behind the scenes making sure that a crisp, clear image is provided for smooth game play, but all we care about is what is shown on the screen. The first thing to consider here is the graphics card. Looking back at the recommended specifications for Far Cry, you'll want a fast graphics processor backed by 128MB (or more) of video memory. Systems using onboard video, or a PCI based video card, may do fine in desktop applications, but game play may be less than enjoyable. It used to be that 128MB of video memory was a big deal, but now it is a fairly common base offering. High-end cards with 256MB or 512MB are readily available, even though some may argue that 128MB on a card with a fast processor may be enough. PCI Express video cards are the latest and greatest, and for those with motherboards that support PCIe, the extra bandwidth coupled with a high-end graphics processor will provide the best performance. Systems supporting SLI (http://www.slizone.com/content/slizone/index.html) can take things to the extreme by harnessing the processing power of a pair of matching PCIe graphics cards for use on one display. AGP cards still dominate in terms of popularity, and most chipsets found in the PCIe format will also be found in AGP format. The performance of AGP cards with the same high-end chipset as a PCIe card can be expected to be lower, but still more than adequate for smooth game play. A look at one manufacturer's website shows that both a PCIe (http://www.rosewill.com/product/product.aspx?productId=154) and AGP (http://www.rosewill.com/product/product.aspx?productId=153) version of an nVidia GeForce 6600GT are available with 128MB of memory.
The 6600GT PCIe card, such as this one (http://www.geeks.com/details.asp?invtid=PCIEOCT-FX6600GT128&cat=VCD) at Geeks.com, is currently quite popular with game players, as it offers excellent performance at a price that isn't too outrageous. Let's not forget the monitor. All the graphics processing power in the world is worthless without somewhere to see it. CRT monitors still dominate in terms of popularity for game players, but LCDs are making great strides. The main issue to consider with LCDs is response time, a figure that should be provided in the list of specifications. Presented in milliseconds (ms), lower values are preferable, as the figure indicates how quickly the image is updated. In fast-paced games, ghosting may occur on slower monitors because the action changes faster than the panel can keep up with. Comparing this 17" TFT LCD from SVA (http://www.geeks.com/details.asp?invtid=VR-17BR&cat=MON) to this one from Princeton (http://www.geeks.com/details.asp?invtid=SEN-714R&cat=MON) shows that, among other things, an extra $25 provides a response time of 16ms on the Princeton versus 25ms on the SVA. The criteria for acceptability may be subjective and relative to the game being played, the person
playing it, and other system settings, but some may argue that LCDs with a response time of 16ms or less are best suited for game play. As the technology advances, LCD monitors with response times in the single digits are starting to show up, such as the 19" ViewSonic VX924 (http://www.viewsonic.com/products/desktopdisplays/lcddisplays/xseries/vx924/) with a response time of 4ms.

Audio

The audio portion of video games plays a major role in the overall experience. Games are developed to take advantage of surround sound audio, and the system the games are played on needs to be able to share this with the user. The first step is to make sure a sound card capable of properly reproducing the sound is available. Many modern motherboards include a 5.1 channel sound processor onboard, but there are PCI card upgrades available for those who need them. Budget-conscious gamers can add something like this 7.1 channel sound card (http://www.geeks.com/details.asp?invtid=A-87688C-N&cat=SND) to their system, or if they have the money for it, they can add the extreme performance and features of the 7.1 channel Creative Audigy 2ZS Platinum (http://www.geeks.com/details.asp?invtid=70SB035000003-DT&cat=SND). Once you have the sound card, you need a decent set of speakers to realistically reproduce the sounds of things like gunfire, explosions, and footsteps, as well as to indicate where the sounds are coming from. A set of surround sound speakers is necessary for distinguishing where approaching enemies are when out of your field of view, or for determining where distant gun shots are coming from. Two stereo speakers may work well enough for quietly listening to music, but they aren't going to cut it for game play. A 5.1 channel, six piece set (http://www.geeks.com/products.asp?cat=SPK#6PieceSpeaker/SubwooferSet) providing two front speakers, two rear speakers, a center channel, and a subwoofer is required for a realistic gaming experience.
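Circling back to the display figures for a moment: one way to put the response times quoted above in perspective is to convert them into the fastest frame rate a panel could fully transition between. This is a simplification (real ghosting depends on more than the quoted spec), sketched here for illustration:

```python
def max_clean_fps(response_ms: float) -> float:
    """Rough ceiling on frames per second a panel can fully transition,
    treating the quoted response time as the only limit (a simplification)."""
    return 1000.0 / response_ms

for ms in (25, 16, 4):  # the SVA, Princeton, and ViewSonic figures above
    print(f"{ms} ms -> ~{max_clean_fps(ms):.0f} fps")
```

By this yardstick, the 25ms panel tops out around 40 fps while the 16ms panel clears 60 fps, which is consistent with the 16ms-or-less rule of thumb for gaming.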
Some may find that their neighbors don't care to share in the excitement of their latest game. For them, perhaps a set of headphones is a better investment than a set of speakers. The performance of headphones may be just as good as speakers, as some have been designed with multiple drivers to reproduce 5.1 channel surround sound (http://www.zalmanusa.com/usa/product/view.asp?idx=110&code=023). Other headphones provide force feedback that actually vibrates to enhance the effect of things like explosions. These Meritline Vibra (http://www.geeks.com/details.asp?invtid=VIBRA&cat=SPK) 2 channel headphones provide such a feature, as well as including a microphone. Many multiplayer games support the use of microphones to allow team members to communicate with each other.

Input Devices
The interface between the player and the computer is obviously an important one. Items such as keyboards, mice, and game controllers are all critical to ensure that a player can't blame poor performance on anything but a lack of skill. Some may say a keyboard is a keyboard, and that it can't possibly matter, but it does. Having a comfortable keyboard is the top priority, and other features may make things even more enjoyable during game play. A keyboard such as this one (http://www.geeks.com/details.asp?invtid=SIL-USBPS2-2160-WB&cat=MOU) may be desirable for two reasons. One, the backlit keys allow for easy viewing in dimly lit rooms; lowering the lights makes the monitor appear brighter and perhaps have better contrast. Two, the multifunction keys may allow combination commands to be programmed into one button. For those who want to get really serious with a keyboard for gaming, look into the Zboard (http://www.zboard.com/us/index.html), considered by many to be the ultimate gaming keyboard.
Most computer games use the mouse as the main control for direction and weapon selection/use, so a good mouse is obviously quite important. Smooth, precise movement is critical to getting around quickly and making sure the shot hits the mark. An old roller ball mouse jammed full of dust probably won't help; an optical or laser mouse is the way to go. Logitech has mice that provide the precision needed, as well as ergonomically designed bodies that should remain comfortable through hours of intense game play. The MX510 (http://www.geeks.com/details.asp?invtid=931162-0403-DT&cat=MOU) is a wired optical mouse, and the MX1000 (http://www.geeks.com/details.asp?invtid=931175-0403-DT&cat=MOU) is a wireless laser mouse that takes performance and comfort to the extreme. While talking about precision and smooth movement, we can't neglect the mouse pad. Performance mousing surfaces such as the X-Ray Pad (http://www.xraypad.com/) and the Maxtill G-Pad (http://www.maxtill.com/eng/index.php) provide uniform surfaces in various sizes to suit any user's needs. When considering games, we have to talk about game controllers. Many computer games don't need anything more than a keyboard and mouse, but some games benefit from specialized controllers that enhance game play. Some controllers have cloned the popular shape of controllers found on popular console gaming systems. This controller (http://www.geeks.com/details.asp?invtid=PCJOYPAD-BLUN&cat=JOY&cpc=GAM) bears a striking resemblance to a PlayStation controller, allowing those familiar with the controls on that system to be comfortable on a PC as well. In addition, there are controllers for driving games (http://www.geeks.com/details.asp?invtid=NASCARPROWO&cat=JOY&cpc=GAM) and flying games (http://www.geeks.com/details.asp?invtid=TGFOX2-TR&cat=JOY&cpc=GAM), among others.
Final Words Having a computer configured to be the ultimate gaming system with all the latest and greatest hardware could easily cost several thousand dollars. Guess what? Within a few months, all of those cutting-edge components will be old news, and a whole new batch of products will be available with even greater performance. But if you are like most consumers, you have a budget, and it is possible to pick components that provide decent performance without mortgaging the house.
Case Modding The previous Tech Tip (http://www.geeks.com/pix/techtips-02JUN05.htm) took a look at a few items that may be high on the wish list of computer game players. Although those items may improve the functionality of games, one thing that seems to go hand-in-hand with a gaming rig is a system with a unique appearance. The rising popularity of LAN parties has elevated the interest in customized computer cases, and what was once left to highly creative and skilled individuals is now so mainstream that it might not even be considered custom. Go back just a few short years and the appearance of pretty much every computer was a slight variation on the same theme. Boring beige boxes dominated the industry, and people looking to express their individuality through the style of their computer had to take matters into their own hands. The term modding may not even be a real word (I know MS Word's spell checker is upset with my use of it), but it is appropriate for loosely describing the broad field of modifying a computer system to give it a personalized style. This tip will proceed by looking at some of the basic items, tools, and accessories used in modding. Tools The old-school case modders may be more likely to find their gear at hardware, automotive, and electronics stores than at a computer store. They might take a plain case like this (http://www.geeks.com/details.asp?invtid=ATX208P&cat=CAS) and turn it into something even more unique than this one (http://www.geeks.com/details.asp?invtid=CP88693&cat=CAS), with just a bit of creativity and a steady hand. For those who take pride in doing the job themselves, there are certain tools that should be included in the typical modder's toolbox. The Dremel (http://www.dremel.com/) has to be considered a must-have. This high-speed rotary tool can be used for cutting, drilling, shaping, grinding, polishing, and more on anything from wood to plastic to hard steel.
A Dremel can help take the most mundane case and open up new vent holes for fans, cut out a window in the side panel, or create intricate decorative cut-outs (http://www.bigbruin.com/forum/viewtopic.php?p=27461#27461). A hole punch may not be as popular now that Dremels are so widely available, but it is an excellent tool for punching perfect circles into thin sheet metal. For those who have access to one, it makes easy work of adding a fan hole to any case with less than ideal air flow.
A nibbler is another tool that may have lost some appeal with the popularity of the Dremel, but it is a manual tool that can be handy for cutting thin metal or plastic. Somewhat like a heavy-duty pair of shears, the nibbler gets its name because it takes small bites out of the surface in question, allowing the user to slowly cut the desired pattern. A soldering iron will come in handy for those who want their electrical modifications to be a little more robust (and tidy) than wire nuts or electrical tape alone will allow. Adding a custom lighting scheme including LEDs, switches, and other items requires running some wires, and most likely joining them in a neat and secure manner. Custom cases are a lot like hot rods, except for computer geeks. Some may have a supercharged (overclocked) engine and some serious performance hardware, but what will first grab someone's attention is a sharp appearance. Paint is key to a finely finished, modded computer, just as it is to a hot rod. You can transform a case into something with classic style, into a tribute to your favorite game, or into something whimsical (http://images.gruntville.com/casegallery/Dracos-Mods/final1) with just a bit of paint. The things serious case modders can create with their tools and creativity wind up looking less like computer cases and more like art or props from a movie. Things like the Matrix Regenerator (http://www.thebestcasescenario.com/index.php?module=photoalbum&PHPWS_Album_op=view&PHPWS_Album_id=3&MMN_position=55:55) or Rebirth (http://www.thebestcasescenario.com/index.php?module=photoalbum&PHPWS_Album_op=view&PHPWS_Album_id=4&MMN_position=49:49) projects may be beyond the skills (and patience) of most, but they are awesome to see. Lighting Lighting is a key element in case modding, and LEDs may be the most common way to light up the inside and outside of a case.
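Wiring LEDs to a PC's power rails means dropping the excess voltage across a current-limiting series resistor, and the math behind the LED calculators used for this is just Ohm's law. A minimal sketch (the forward voltage and current figures are illustrative assumptions; always check the LED's datasheet):

```python
def led_resistor(supply_v, led_forward_v, led_current_a):
    """Series resistor needed to drop the excess voltage across an LED.

    Ohm's law: R = (V_supply - V_forward) / I_led
    """
    if led_forward_v >= supply_v:
        raise ValueError("supply voltage must exceed the LED forward voltage")
    return (supply_v - led_forward_v) / led_current_a

# Example: a typical blue LED (~3.3 V forward drop, 20 mA) on a PC's 12 V rail
r = led_resistor(12.0, 3.3, 0.020)
print(round(r))  # 435 ohms -- round up to the next standard value, e.g. 470
```

In practice you round up to the nearest standard resistor value so the LED runs slightly below, never above, its rated current.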
Individual LEDs can be purchased from electronics stores and wired together to create a unique array of color, intensity, and effects. Typical LEDs are rated for voltages and currents lower than what a computer power supply provides, and an LED calculator (http://linear1.org/ckts/led.php) is a useful application for determining what combination of LEDs and resistors might work best. For those who want their LEDs pre-configured, just about every type of component now seems to be available with LEDs built in. LEDs are like magnets to computer geeks, who can't help but be drawn to them. Some common items that feature LEDs include cooling fans (http://www.geeks.com/details.asp?invtid=D80SM-BLUE-LED&cat=FAN), fan controllers (http://www.geeks.com/details.asp?invtid=4CH-FANCTRL-SIL&cat=CAS), power supplies (http://www.geeks.com/details.asp?invtid=APGM480W&cat=CAS), mice (http://www.geeks.com/details.asp?invtid=RED-3DOPTN&cat=MOU), and other random odds and ends (http://www.geeks.com/details.asp?invtid=LCH-C-BN&cat=CAS). CCFLs may be the next most common form of case lighting. Cold Cathode Fluorescent Lamp (CCFL) lighting is similar to typical fluorescent lighting in that it uses electricity to excite a gas that
produces the visible light. The cold portion implies that minimal heat is generated while still producing particularly bright light. CCFLs can output a multitude of colors, including UV light, and generally come in tubes or coiled, like on this fan (http://www.geeks.com/details.asp?invtid=CCF120UVVL&cpc=SCH&srm=0). The Sunbeam Transformer case (http://www.geeks.com/details.asp?invtid=IC-TR-N&cat=CAS) includes a green CCFL tube to give the front grill its eerie glow. Want one more type of lighting that can be used to mod a case? Good. EL, or Electroluminescent, lighting can be found in a variety of products, and it is distinguished by its even glow, long life, and low power consumption. Products such as keyboards (http://www.geeks.com/details.asp?invtid=SIL-USBPS2-2160-WB&cat=MOU) and case badges (http://www.geeks.com/details.asp?invtid=EL-B02-FLAME&cpc=SCH&srm=0) are just two examples that take advantage of the cool lighting effect provided by this high-tech, thin material. UV (ultraviolet) is a term people may usually associate with harmful sun rays, but to a modded case, it is the special effect for when the lights go down. UV-reactive computer components, like those old Led Zeppelin posters in college, give off that freaky neon glow when subjected to a black light. Fans (http://www.geeks.com/details.asp?invtid=CCF120UVVL&cat=CAS), various cables (http://www.geeks.com/details.asp?invtid=ULT31600&cat=CBL), and other items are available to be used in conjunction with black-light case lighting. Pre-Modded Cases What was once only available to those with the skills and tools to make it happen is now readily available to anyone. Granted, pre-modded items aren't as extreme or as personalized as something undertaken from scratch, but they are more interesting than the boring beige box we all had at one time.
Clear cases (http://www.geeks.com/details.asp?invtid=CLRCAS-3LED-N&cat=CAS) or themed cases (http://www.geeks.com/details.asp?invtid=COBRA-822W-BLK&cat=CAS) may be a good starting point for a modding project, or may be good enough for some as is. It is all up to personal taste, and the personal commitment to actually caring what your computer case looks like (if you care at all). Serious enthusiasts may look at the pre-modded items as posers, but they cannot deny that the popularity of these items spawned from the early days when a modded case was exactly that. Final Words Modding is all about individuality and having fun with what used to be a boring object. There is no right or wrong way to do it, and the possibilities are only limited by a person's creativity (and perhaps creative skills). Whether taking the approach of purchasing pre-modded items or starting from scratch with your power tools in hand, sources of inspiration can be found online at places such as the Gruntville Case Mod Gallery (http://www.gruntville.com/gallery_front.php), The Best Case Scenario
(http://www.thebestcasescenario.com/index.php?module=photoalbum&PHPWS_AlbumManager_op=list), or Mini-ITX.com (http://www.mini-itx.com/projects.asp) for the small form factor enthusiasts. If you prefer your information in print rather than online, there are many modding guides, like Going Mod: 9 Cool Case Mod Projects (http://www.geeks.com/details.asp?invtid=GOING-MOD&cat=CAS).
Tech Tip 31 - Gaming Graphics Glossary A recent Tech Tip (http://www.geeks.com/pix/techtips-02JUN05.htm) provided a look at items that may be on the wish list of some computer game players, and a solid graphics card is definitely at the top of such a list. Graphics cards, like so many other tech components, seem to require their own language to describe the functions and features they provide. This Tech Tip will take a look at a handful of terms related to graphics cards in general, and some more specifically related to graphics cards as used for video games. 1. Accelerated Graphics Port (AGP) AGP is one type of interface for graphics cards whose days may be numbered, but it is presently the most common type out there due to its years as the number one format. According to a previous Tech Tip (http://www.geeks.com/pix/techtips-030305.htm), AGP is a dedicated, point-to-point interface that connects a video card directly to the system's memory and processor. Developed by Intel in 1996, AGP graphics cards (http://www.geeks.com/products.asp?cat=VCD#AGP-256MB) were the leaders for gaming graphics until the release of PCI Express. 2. Aliasing / Anti-Aliasing This topic was covered in a previous Tech Tip (http://www.geeks.com/pix/techtips-05MAY05.htm) related to digital cameras, and the concept is the same as it applies to gaming graphics. Aliasing is basically the tendency for a curved or diagonal line to appear jagged, since it is composed of tiny squares, or pixels. Anti-aliasing remedies this jagged appearance through software, making images appear smoother and more natural. Video games may provide varying levels of anti-aliasing, and generally, with higher levels of anti-aliasing, the overall performance of the game will be lower, since more processing power is being dedicated to smoothing each image. For this reason, many graphics card reviews will show the effect on the frame rate of a game when run with different levels of anti-aliasing applied. 3.
Anisotropic Filtering The definition of the word anisotropic from m-w.com is "exhibiting properties with different values when measured in different directions." This is a common filtering technique applied to video games that helps improve the perspective of the image shown. Like anti-aliasing, various levels are available, and the higher the level of anisotropic filtering, the lower the overall performance of the game. Reviews may also focus on the effect of various levels of this filtering when presenting the frame rates achieved on a certain graphics card. 4. Application Programming Interface (API) A set of standard instructions that allows video game programmers to work more efficiently by not having to recreate routine operations that may be common across many games. Some examples of APIs include Direct3D and OpenGL.
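The anti-aliasing idea from entry 2 can be illustrated with a toy supersampling routine: each output pixel averages several subpixel samples, so a hard edge comes out as an intermediate shade instead of a jagged stair-step. This is a simplified sketch of the concept, not how a real GPU implements it:

```python
def supersample(sample, x, y, factor=4):
    """Average factor x factor subpixel samples for output pixel (x, y).

    `sample(fx, fy)` returns the scene brightness (0.0-1.0 grayscale)
    at fractional coordinates -- standing in for the 'true' image.
    """
    total = 0.0
    for sy in range(factor):
        for sx in range(factor):
            total += sample(x + (sx + 0.5) / factor, y + (sy + 0.5) / factor)
    return total / (factor * factor)

# Toy scene: a hard vertical edge at fx = 0.5 (white left, black right)
edge = lambda fx, fy: 1.0 if fx < 0.5 else 0.0

# A pixel the edge passes through comes out gray rather than all-or-nothing
print(supersample(edge, 0, 0))  # 0.5 -- a smoothed, intermediate value
```

Higher `factor` values smooth edges more, which is exactly why higher anti-aliasing levels cost more processing power per frame.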
5. Artifact An artifact is any unintentional and undesirable element found in the image of a video game. Artifacts may include a flickering effect, pixels colored incorrectly, image ghosting (where a previous image is still visible in later screens), blurring, or gaps in the processing of images. Artifacts may be caused by overclocking the system (especially the graphics processor), unstable or incorrect drivers, component overheating, and other hardware or software errors. 6. Bump Mapping Bump mapping is a means of applying textures to give the 2D image on screen a more rough (or bumpy) 3D appearance. Lighting effects are used to create light and dark areas to simulate the surface of items like walls, rocks, etc. 7. Direct3D Direct3D is an API owned and developed by Microsoft for the creation of 3D games. 8. DirectX DirectX is the term given to a collection of common APIs, including Direct3D, which are owned and developed by Microsoft. 9. Digital Visual Interface (DVI) DVI is an interface that allows for the transfer of a digital video signal from a computer to a display, which increases the image quality and performance over a comparable analog system. The white connection seen on the left-hand side of this graphics card's (http://www.geeks.com/details.asp?invtid=PCX-5300TM-128MTV&cat=VCD) back plate is a DVI connection. DVI is not only used in computers, but also as an interface for televisions to display high-quality images from HDTV, DVD, and other digital sources. There are three types of DVI connectors: DVI-A (DVI Analog) - an analog-only DVI connector (you don't get the benefits of the digital signal; fortunately, you really don't see these anymore); DVI-D (DVI Digital) - a DVI connector that only puts out a digital signal; and DVI-I (DVI Digital or Analog) - the most common connector, which can output either a digital or an analog signal.
When using a DVI connector with an analog monitor (either a DVI-A or DVI-I connector), you will usually need a DVI-to-VGA adapter (http://www.geeks.com/details.asp?InvtId=DVI-M-HD15F); sometimes this is provided by the video card manufacturer, but often it is not. When most card manufacturers' ads refer to their cards having a "DVI connector," they most often mean a DVI-I connector.
10. Frame Rate The speed at which still images are generated on the screen in order to create the effect of full motion is referred to as the frame rate, which is measured in frames per second (fps). While humans can generally only perceive around 30 frames per second, many gaming benchmarks indicate that cards can provide performance far exceeding this value, and some may consider something around 60 fps the current minimum for acceptable performance. Adjusting many of the settings described in this Tech Tip will have an impact on the frame rate, and finding a balance of good performance and appearance in today's games may take some work on anything but the best graphics cards. 11. GDDR GDDR is a type of DDR (double data rate) memory produced specifically for graphics applications. Most modern graphics cards use GDDR memory to handle the demands of graphics processing, as the specialized clock speeds, bandwidth, and power requirements are more appropriate than the generally less expensive standard DDR format.
12. Graphics Processing Unit (GPU) The GPU is the processor found on a graphics card, and is the main chip for handling the work required to create the image produced on a display. 13. OpenGL OpenGL is an application programming interface that competes with Direct3D, and it is not owned by any one corporation. The open nature of this API appeals to those in favor of open source development, which can lead to more frequent updates. 14. PCI Express (PCIe) PCI Express is the latest interface for connecting a graphics card to a computer system, and it is the successor to AGP in terms of gaming graphics performance. A recent Tech Tip (http://www.geeks.com/pix/techtips031005.htm) focused on PCIe and detailed the significant performance increases and flexible configurations available with PCI Express graphics cards (http://www.geeks.com/products.asp?cat=VCD#PCIExpress(PCIe)). 15. Random Access Memory Digital-to-Analog Converter (RAMDAC) RAMDACs are chips found on graphics cards that convert the digital signal received from the graphics processing unit (GPU) to an analog signal to be sent to the monitor. Digital displays can receive the unconverted signal from graphics cards capable of digital video output (via the DVI connector), and therefore do not require the additional processing provided by the RAMDAC. 16. Resolution The number of pixels displayed on the screen is referred to as the resolution, and the value is represented by the number of horizontal pixels times (x) the number of vertical pixels. Raising the resolution from 800x600 to 1600x1200, for example, will provide enhanced image quality, but generally at the expense of lower frame rates. 17. Texture Mapping Texture mapping uses bitmap images stored in memory to provide the surface appearance of an object rendered in 3D. The texture is wrapped around the frame of an object, providing a fairly simple approach to rendering a complex shape.
The simplicity may save processing power and provide a reasonable representation of the desired texture, but it can also lead to a chunky appearance during motion. 18. Vertical Sync (VSync) Vertical Synchronization is an option found in many games that allows the frame rate of the game to be matched to the refresh rate of the monitor. Generally, enabling VSync provides the greatest stability, but turning it off can allow for much higher frame rates. The downside of the greater speed is the potential for artifacts to develop. 19. Video Graphics Array (VGA) VGA was originally a graphics standard developed by IBM that allowed for 640x480 resolution with 16 colors. This standard has obviously been advanced to provide the greater resolutions and color depths we enjoy today, but all computers still support at least VGA mode. The term VGA is now mainly used to describe the 15-pin analog connection found on many graphics cards for connecting a monitor. The blue connection seen on the right-hand side of this graphics card's (http://www.geeks.com/details.asp?invtid=SE6200-N&cat=VCD) back plate is a VGA connection.
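The resolution/frame-rate trade-off described in entries 10 and 16 is easy to see as pixel throughput: going from 800x600 to 1600x1200 quadruples the pixels the card must fill every frame. A back-of-the-envelope sketch (the fill-rate figure is a made-up, illustrative number; real frame rates depend on far more than pixel count):

```python
def pixels_per_frame(width, height):
    """Total pixels the card must fill for one frame at this resolution."""
    return width * height

def rough_fps(fill_rate_pixels_per_s, width, height):
    """Crude frame-rate ceiling if pixel fill were the only limit."""
    return fill_rate_pixels_per_s / pixels_per_frame(width, height)

FILL = 1_000_000_000  # hypothetical 1 gigapixel/s fill rate (illustrative)

print(pixels_per_frame(1600, 1200) / pixels_per_frame(800, 600))  # 4.0
print(round(rough_fps(FILL, 800, 600)))    # 2083
print(round(rough_fps(FILL, 1600, 1200)))  # 521 -- quadruple the pixels, quarter the ceiling
```

This is why reviews benchmark the same card at several resolutions (and anti-aliasing levels): each step up divides the card's fixed processing budget across more work per frame.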
20. Video In / Video Out (VIVO) VIVO-capable graphics cards can not only send a video signal out to a monitor, but they can also receive a video signal for use by the computer system. VIVO-capable graphics cards, such as this one (http://www.geeks.com/details.asp?invtid=RX60X128V&cpc=SCH&srm=0), or the
Five Things to Consider When Buying a Laptop Computer Purchasing a laptop is a large investment, and one that can be complicated by all of the options, manufacturers, and technical mumbo-jumbo that needs to be sifted through. Before you spend a lot of money on a laptop, it is important to spend a little time considering some basics that may affect the decision-making process. This Tech Tip will take a look at five of the innumerable things worth considering when buying a laptop computer.
Ergonomics If you are going to be spending any significant amount of time working on a laptop computer, you're going to want to be comfortable. A well-designed interface is essential for comfort, as well as good health. Carpal tunnel syndrome and tendonitis are some of the more common conditions associated with extended computer usage, and selecting the most comfortable laptop may help avoid them altogether. The keyboards on laptops generally feature compressed layouts with smaller keys, which may place a strain on hands and wrists as users try to adapt to these miniature arrangements. Generally speaking, the larger the laptop, the larger the keyboard, as keyboards are usually designed to span the entire width of the unit. Most laptops use either a touchpad or a tracking pointer (knob) as a replacement for a mouse. These may be adequate for occasional use, but even the best designs can become frustrating and uncomfortable when used extensively. Plus, when used for game play or other applications where precise motion is critical, they just don't cut it. Purchasing a separate mouse may be the best bet, as it allows you to place your arm in a more familiar (and comfortable) position, as well as providing something that may fit your hand much better. Notebook mice (http://www.geeks.com/details.asp?invtid=LM-811-O&cat=MOU&cpc=USB) are available in wired and wireless versions, and generally feature a slightly smaller footprint than your typical mouse.
Connectivity Being able to connect to common devices is just as important on laptops as it is on desktops, but adding these connections down the road is not as easy on a laptop. Upgrades aren't as easy on laptops due to the basic design, so make sure what you need is included up front. Wireless networking is almost a must-have feature on laptops now. The cord has been severed on every other shackle confining you to your desk; don't let network connectivity hold you back. Wireless networking adapters are available as upgrades via either PCMCIA cards (http://www.geeks.com/details.asp?invtid=DWL-650PLUS&cat=NBB) or USB adapters (http://www.geeks.com/details.asp?invtid=RU5AWGB2U&cat=NET&cpc=USB), but many laptops now offer it onboard, hidden inside the system's housing. Integrated wireless is the best option if available, as it requires fewer accessories to carry and configure, and it leaves those expansion ports open for other uses. USB 2.0 may be the most common peripheral connection, yet many laptops still come with just one port. That's fine if you don't mind carrying around a USB hub (http://www.geeks.com/details.asp?invtid=USB-MH20-GRY&cpc=SCH&srm=0), but the more you have to carry, the less mobile you are. A good example of the importance of USB is that many people decide they want to use a separate mouse for ergonomic reasons, and generally it will connect via USB. On a laptop with just one port, you now have to juggle the use of the mouse with connecting anything else, like a digital camera, MP3 player, or an external hard drive. Firewire may not be as popular as USB, and as such, it doesn't show up at all on many computer systems, whether desktops or laptops. Having this connection may not be necessary for everyone, but for those who want it, keep in mind that its inclusion on any particular laptop is not a given.
Bluetooth (http://geeks.com/pix/techtips-011905.htm) is another type of connectivity you may want in a laptop, but its popularity has yet to really catch up to its hype. More and more consumer electronics devices are starting to feature Bluetooth technology, but for general computer applications, it may be more trouble than it is worth. This Toshiba Satellite (http://www.geeks.com/details.asp?invtid=PSP30U-01Q001ZR&cat=NBB) features a solid assortment of connections with three USB 2.0 ports, a Firewire port, integrated wireless and wired networking, and even a modem.
Power Management If you are going to use a laptop as it was intended, away from your desk, you're going to want it to provide as much battery life as possible. The first step is to shop around for a unit that offers the best battery life possible, and then seek out independent reviews to verify this performance. A good laptop should be able to run for four hours or more on a full charge, and as the technology advances, finding units that can double this time isn't unrealistic. The operating system on most laptops will allow the hardware to be configured to utilize the battery as efficiently as possible. It is just up to the user to navigate to these settings and configure things like the display turning off, hard drives powering down, or even the processor slowing down when it isn't needed at full speed. Not all processors can provide this speed throttling, but finding a system with a Mobile Pentium/Celeron (http://www.geeks.com/details.asp?invtid=28885RU-N&cat=NBB) or Centrino (http://www.geeks.com/details.asp?invtid=28885RU-N&cat=NBB) processor may be your best bet to ensure this capability. Another way to ensure extended life away from a power outlet is to just add a second battery. Although you can obviously carry a charged spare in your bag, some laptops allow for two batteries to be installed at once, with one generally replacing the optical disk drive.
Size / Weight All laptop computers are not created equal, and the size and weight of the various models reflect that. Some may weigh more than others due to the quantity of components included, but it may also be due to the quality of the components. Larger displays, multiple hard drives, and other integrated components will all contribute to the weight of a laptop. The largest single source of weight in a laptop may be the battery, and systems with two batteries, as described above, should be expected to be much heavier. No laptop may be considered heavy in the grand scheme of things, but just a few pounds more may be noticeable if you regularly have to lug it through a busy airport or across a large college campus. Geeks.com may not provide the exact weight of each laptop they carry, but they do provide a shipping weight for each, which is a good approximation of what the laptop and various accessories will weigh when loaded into your carrying bag. The overall size of a laptop is generally governed by the size of the display included. You may have seen the commercial where Yao Ming (7'5" basketball player) and Verne Troyer (Mini-Me) compare their laptops with 12-inch and 17-inch monitors. It is an excellent demonstration of the range of sizes available in laptop computers, and how the various sizes may be appropriate for different users. Those seeking a replacement for their desktop computer may insist on a 17" display, while those seeking to minimize size and weight in the name of portability may be willing to select a laptop with a smaller display.
Future Proof Basically, purchase as much laptop as you can afford, so that a year or two down the road you will be less likely to need a replacement. Processors in a laptop are generally not upgradeable, or at least quite difficult to upgrade, so a system with marginally adequate speed for today's needs will no doubt be obsolete sooner than you might expect. Desktop computers generally offer the convenience of having their processors (and other components) upgraded, making this less of an issue, but it is important to plan ahead with laptops, or to plan on buying another one in a few years. The graphics processor is another integrated feature that should be considered before making a purchase, as there is no upgrading it. Many laptops may offer somewhat basic graphics intended for good 2D display, with 3D performance that may be hit or miss as far as quality is concerned. In general, laptops were never intended for 3D gaming, but things are changing, and many manufacturers now offer higher performance graphics solutions that can rival many desktop computers. ATI is well known for their high performance graphics products, and they offer the Mobility Radeon X series (http://www.ati.com/products/mobile.html) of graphics processors based on their popular desktop solutions. Laptop memory (http://www.geeks.com/products.asp?cat=RAM#200pinDDRNotebookMemory) is less of a bottleneck, as it is readily available and can be upgraded rather easily. That said, many notebooks offer a base configuration of memory that may not be adequate for your particular needs. It is suggested that a Windows XP system have a minimum of 256MB of memory (http://www.bigbruin.com/reviews05/memorybuy/index.php?file=2), and you may find that this is what is offered on many systems. 512MB is the recommended amount of memory for smooth operation of Windows XP, and many users with more intensive applications to run may insist on 1024MB.
If you intend to run serious business applications or want to play some modern 3D games, it may be worth having that base 256MB upgraded before the laptop ships to you. Final Words Picking a laptop computer will probably be more involved than reviewing five simple steps, but you have to start somewhere! Each of these steps will hopefully guide other decisions and make the process less frustrating, while also leading to the selection of the best laptop possible.
5 Ways to Block Spam Spam is one of those things that nobody wants, but probably has plenty of. If there happens to be anyone out there unfamiliar with spam, we are not talking about the luncheon meat, but the unsolicited junk e-mail that clogs our inboxes. And in case you are curious, according to some sources, the junk mail version of spam earned its name from a Monty Python skit regarding the luncheon meat of the same name. Care to sing along? From offers for prescription drugs, to mortgage refinancing, to sexually explicit content, spam can leave us having to sift through mounds of trash to find the few messages we actually care to read. Although eliminating all junk e-mail may be impossible, there are several steps that can be taken to all but eliminate spam from your inbox. 1. Protect Your E-mail Address One of the best strategies for avoiding spam is to protect your personal e-mail address. Your best defense is for the spammers to not even know you exist, but this is a difficult task to accomplish. Many spam mailing lists are created by harvesting e-mail addresses from websites where your information may be displayed. Newsgroups, bulletin boards, and chat rooms are just a few examples of places where spammers may run scripts to collect anything that resembles an e-mail address. Many sites, such as bulletin boards, have safeguards to protect their members, but these do nothing if members post their personal information in a post, their signature, or somewhere else that puts the information in plain sight. In addition, signing up with unknown sources for online contests, mailing lists, and similar occasions where you need to provide an address as part of the registration process may also expose your address to spammers. Using your best judgment is your best defense: if you want to keep your mailbox clean, keep your address private, only giving it out to trusted parties. 2.
Create a Spam E-Mail Account Protecting your e-mail address is easier said than done, and if you find that it is impossible to keep your personal e-mail address completely private, a separate account may be the solution. Referred to by some as a throw-away account, this e-mail account doesn't have to cost you anything, as suitable e-mail accounts are available for free from places such as Hotmail and Yahoo. This throw-away account is the best choice when you are unsure that your privacy will be protected: use it when registering with newsgroups, bulletin boards, sweepstakes, or in any other similar situation. You have to use your better judgment, as signing up for something from a trustworthy source, like the Computer Geeks mailing list, is much different from many things we'll just leave to our imaginations. Since you are not expecting any important mail at this account, if it becomes overrun with spam, you do just as the name suggests and throw it away for a new one. 3. Message Rules in Outlook / Outlook Express Most people use either Outlook or Outlook Express as their e-mail client, but not all of them may be familiar with creating message rules from the Tools drop-down menu. Rules allow you to manually filter the delivery of e-mail, and can be created to analyze the sender's name, subject line, and message body before processing. For example, a rule can be created so
that any message with a particularly offensive word in the subject line is automatically moved to the Deleted Items folder, or even better, just deleted from the server before download. Another option provided by Outlook and Outlook Express allows the user to add senders to their Blocked Senders list. No rule needs to be created, and in a few clicks, a sender of unsolicited e-mail can be added to your personal blocked senders list. Whenever mail arrives from this sender in the future, it will skip the inbox and go straight to the Deleted Items folder. Windows XP with Service Pack 2 provides even greater security in a variety of areas, including Outlook and Outlook Express. Many spam e-mails have images in the body that are coded to identify receipt of the e-mail. If the individually coded image has been viewed, the spammer knows that you have seen the e-mail, thus confirming your address as valid. With SP2, images are blocked to prevent your computer from being identified, thus keeping the spammer from confirming they have a valid address to continue mailing.

4. Third Party Software
There are numerous applications available for purchase, or as free downloads, specifically intended to filter spam as it enters your inbox. These programs identify telltale signs of a spam message by analyzing hidden tags in the message, the use of text and images in the message, and various other clues that point to a message being unwanted. A few examples of spam filtering software are available from these three companies: SPAMfighter, MailWasher, and Cloudmark. Each offers its own twist on the interface and manageability, but they all allow users to take control of the spam in their Outlook or Outlook Express mailboxes. The price tag on this type of software may involve a one-time fee of $30 or more, and some come with annual subscriptions costing up to $40.
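Whether implemented as Outlook message rules or by a third-party filtering product, the core idea is the same: match the sender and subject against lists you control and decide where the message should land. The minimal Python sketch below is my own illustration of that logic (the names and lists are hypothetical, not any product's actual API):

```python
# Minimal sketch of a message rule: check the sender and subject against
# user-defined block lists and decide which folder the message goes to.
BLOCKED_SENDERS = {"offers@spamexample.net"}
BLOCKED_SUBJECT_WORDS = {"mortgage", "refinance"}

def apply_rules(sender: str, subject: str) -> str:
    """Return the destination folder for a message."""
    if sender.lower() in BLOCKED_SENDERS:
        return "Deleted Items"
    # Any blocked word appearing in the subject also triggers the rule.
    if BLOCKED_SUBJECT_WORDS & set(subject.lower().split()):
        return "Deleted Items"
    return "Inbox"
```

Real filters go well beyond this (scoring, header analysis, image heuristics), but every rule you create in Outlook's Tools menu reduces to a test like one of these.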
If the free software doesn't cut it for your tastes, these pay versions generally include a free trial so that you can be sure the program is right for you before you spend any money. The logic and data behind the spam filtering is constantly evolving, so these packages need to be kept updated, much like a virus scanning application, and this is where subscription-based offerings come into play.

5. Server Based Solutions
Most major internet service providers (ISPs) now offer a spam filter as part of the package offered to their subscribers. AOL and Earthlink are just two of the big names out there that include a spam filter along with other attractive features like virus protection and pop-up blockers. These ISP filters effectively manage spam at the server before delivery, but they are generally not very customizable at the end-user level, and they obviously only protect e-mail accounts provided by the ISP. Protection similar to what an internet service provider offers can be implemented by just about anyone with their own domain name and access to their server. Domain names and web hosting have become so cheap that it is not all that uncommon for people to have their own website, or at least a domain name for e-mail. SpamAssassin is a no-cost, server-based spam fighting solution that can be installed on a server, and it has become a common feature included in many web hosting packages. These solutions use various rules and logic to analyze messages, much like the third party software does, but it all happens at the server level. This keeps the message from having to be downloaded to be processed, thus saving time and precious bandwidth.

Final Words
Spam is a nuisance that impacts people on several levels. Even if the content is not inappropriate or offensive, it is a waste of time and money. Although some spam solutions claim to eliminate 100% of all unsolicited e-mail, my experience tells me that this just isn't realistic.
That said, protecting your e-mail address and implementing the appropriate spam filtering solution should nearly eliminate spam from your life.
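As a closing aside on tip 1: the harvesting scripts mentioned there typically scan page source for plain-text addresses. One common (though imperfect) countermeasure, sketched below as my own illustration rather than anything from a specific product, is to publish the address as HTML character entities, which browsers render normally but naive scrapers often miss:

```python
# Encode an e-mail address as HTML character entities so that simple
# harvesting scripts scanning page source for plain-text addresses miss it.
# (Illustrative only; a determined scraper can still decode the entities.)
def obfuscate_email(address: str) -> str:
    return "".join(f"&#{ord(ch)};" for ch in address)

print(obfuscate_email("user@example.com"))
```

The encoded string can be dropped into a web page or forum signature anywhere the plain address would have appeared.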
Windows Hot Keys: Most people think they know the ins and outs of using their favorite software (and maybe they do), but there are hundreds of little shortcuts that can be used to make common tasks even easier. This Tech Tip is going to follow a different format than the norm and will list a few dozen of these hot keys that can be used to make working in Microsoft Windows even easier. The shortcuts covered are broken up into groups based on the main key involved in activating them. So, let's take a look at what we can do with the ALT, CTRL, SHIFT, and Windows keys, as well as a few combo moves.

General Keyboard Shortcuts
CTRL+C (Copy)
CTRL+X (Cut)
CTRL+V (Paste)
CTRL+Z (Undo)
DELETE (Delete)
SHIFT+DELETE (Delete the selected item permanently without placing the item in the Recycle Bin)
CTRL while dragging an item (Copy the selected item)
CTRL+SHIFT while dragging an item (Create a shortcut to the selected item)
F2 key (Rename the selected item)
CTRL+RIGHT ARROW (Move the insertion point to the beginning of the next word)
CTRL+LEFT ARROW (Move the insertion point to the beginning of the previous word)
CTRL+DOWN ARROW (Move the insertion point to the beginning of the next paragraph)
CTRL+UP ARROW (Move the insertion point to the beginning of the previous paragraph)
CTRL+SHIFT with any of the arrow keys (Highlight a block of text)
SHIFT with any of the arrow keys (Select more than one item in a window or on the desktop, or select text in a document)
CTRL+A (Select all)
F3 key (Search for a file or a folder)
ALT+ENTER (View the properties for the selected item)
ALT+F4 (Close the active item, or quit the active program)
ALT+SPACEBAR (Open the shortcut menu for the active window)
CTRL+F4 (Close the active document in programs that enable you to have multiple documents open simultaneously)
ALT+TAB (Switch between the open items)
ALT+ESC (Cycle through items in the order that they had been opened)
F6 key (Cycle through the screen elements in a window or on the desktop)
F4 key (Display the Address bar list in My Computer or Windows Explorer)
SHIFT+F10 (Display the shortcut menu for the selected item)
ALT+SPACEBAR (Display the System menu for the active window)
CTRL+ESC (Display the Start menu)
ALT+Underlined letter in a menu name (Display the corresponding menu)
Underlined letter in a command name on an open menu (Perform the corresponding command)
F10 key (Activate the menu bar in the active program)
RIGHT ARROW (Open the next menu to the right, or open a submenu)
LEFT ARROW (Open the next menu to the left, or close a submenu)
F5 key (Update the active window)
BACKSPACE (View the folder one level up in My Computer or Windows Explorer)
ESC (Cancel the current task)
SHIFT when you insert a CD-ROM into the CD-ROM drive (Prevent the CD-ROM from automatically playing)

Dialog Box Keyboard Shortcuts
If you press SHIFT+F8 in extended selection list boxes, you enable extended selection mode. In this mode, you can use an arrow key to move a cursor without changing the selection. You can press CTRL+SPACEBAR or SHIFT+SPACEBAR to adjust the selection. To cancel extended selection mode, press SHIFT+F8 again. Extended selection mode cancels itself when you move the focus to another control.
CTRL+TAB (Move forward through the tabs)
CTRL+SHIFT+TAB (Move backward through the tabs)
TAB (Move forward through the options)
SHIFT+TAB (Move backward through the options)
ALT+Underlined letter (Perform the corresponding command or select the corresponding option)
ENTER (Perform the command for the active option or button)
SPACEBAR (Select or clear the check box if the active option is a check box)
Arrow keys (Select a button if the active option is a group of option buttons)
F1 key (Display Help)
F4 key (Display the items in the active list)
BACKSPACE (Open a folder one level up if a folder is selected in the Save As or Open dialog box)

Microsoft Natural Keyboard Shortcuts
Windows Logo (Display or hide the Start menu)
Windows Logo+BREAK (Display the System Properties dialog box)
Windows Logo+D (Display the desktop)
Windows Logo+M (Minimize all of the windows)
Windows Logo+SHIFT+M (Restore the minimized windows)
Windows Logo+E (Open My Computer)
Windows Logo+F (Search for a file or a folder)
CTRL+Windows Logo+F (Search for computers)
Windows Logo+F1 (Display Windows Help)
Windows Logo+L (Lock the computer)
Windows Logo+R (Open the Run dialog box)
Windows Logo+U (Open Utility Manager)

Accessibility Keyboard Shortcuts
Right SHIFT for eight seconds (Switch FilterKeys either on or off)
Left ALT+left SHIFT+PRINT SCREEN (Switch High Contrast either on or off)
Left ALT+left SHIFT+NUM LOCK (Switch the MouseKeys either on or off)
SHIFT five times (Switch the StickyKeys either on or off)
NUM LOCK for five seconds (Switch the ToggleKeys either on or off)
Windows Logo+U (Open Utility Manager)

Windows Explorer Keyboard Shortcuts
END (Display the bottom of the active window)
HOME (Display the top of the active window)
NUM LOCK+Asterisk sign (*) (Display all of the subfolders that are under the selected folder)
NUM LOCK+Plus sign (+) (Display the contents of the selected folder)
NUM LOCK+Minus sign (-) (Collapse the selected folder)
LEFT ARROW (Collapse the current selection if it is expanded, or select the parent folder)
RIGHT ARROW (Display the current selection if it is collapsed, or select the first subfolder)
Shortcut Keys for Character Map
After you double-click a character on the grid of characters, you can move through the grid by using the keyboard shortcuts:
RIGHT ARROW (Move to the right or to the beginning of the next line)
LEFT ARROW (Move to the left or to the end of the previous line)
UP ARROW (Move up one row)
DOWN ARROW (Move down one row)
PAGE UP (Move up one screen at a time)
PAGE DOWN (Move down one screen at a time)
HOME (Move to the beginning of the line)
END (Move to the end of the line)
CTRL+HOME (Move to the first character)
CTRL+END (Move to the last character)
SPACEBAR (Switch between Enlarged and Normal mode when a character is selected)

Microsoft Management Console (MMC) Main Window Keyboard Shortcuts
CTRL+O (Open a saved console)
CTRL+N (Open a new console)
CTRL+S (Save the open console)
CTRL+M (Add or remove a console item)
CTRL+W (Open a new window)
F5 key (Update the content of all console windows)
ALT+SPACEBAR (Display the MMC window menu)
ALT+F4 (Close the console)
ALT+A (Display the Action menu)
ALT+V (Display the View menu)
ALT+F (Display the File menu)
ALT+O (Display the Favorites menu)

MMC Console Window Keyboard Shortcuts
CTRL+P (Print the current page or active pane)
ALT+Minus sign (-) (Display the window menu for the active console window)
SHIFT+F10 (Display the Action shortcut menu for the selected item)
F1 key (Open the Help topic, if any, for the selected item)
F5 key (Update the content of all console windows)
CTRL+F10 (Maximize the active console window)
CTRL+F5 (Restore the active console window)
ALT+ENTER (Display the Properties dialog box, if any, for the selected item)
F2 key (Rename the selected item)
CTRL+F4 (Close the active console window. When a console has only one console window, this shortcut closes the console)

Remote Desktop Connection Navigation
CTRL+ALT+END (Open the Microsoft Windows NT Security dialog box)
ALT+PAGE UP (Switch between programs from left to right)
ALT+PAGE DOWN (Switch between programs from right to left)
ALT+INSERT (Cycle through the programs in most recently used order)
ALT+HOME (Display the Start menu)
CTRL+ALT+BREAK (Switch the client computer between a window and a full screen)
ALT+DELETE (Display the Windows menu)
CTRL+ALT+Minus sign (-) (Place a snapshot of the entire client window area on the Terminal server clipboard and provide the same functionality as pressing ALT+PRINT SCREEN on a local computer)
CTRL+ALT+Plus sign (+) (Place a snapshot of the active window in the client on the Terminal server clipboard and provide the same functionality as pressing PRINT SCREEN on a local computer)
Microsoft Internet Explorer Navigation
CTRL+B (Open the Organize Favorites dialog box)
CTRL+E (Open the Search bar)
CTRL+F (Start the Find utility)
CTRL+H (Open the History bar)
CTRL+I (Open the Favorites bar)
CTRL+L (Open the Open dialog box)
CTRL+N (Start another instance of the browser with the same Web address)
CTRL+O (Open the Open dialog box, the same as CTRL+L)
CTRL+P (Open the Print dialog box)
CTRL+R (Update the current Web page)
CTRL+W (Close the current window)

Final Words
Windows hot keys are all intended to provide some sort of convenient alternative to common tasks, and whether specific combinations do so is up to the individual to decide. Some are simple time-saving motions, while others are complex maneuvers in finger gymnastics. There are dozens of other common Windows shortcuts (and even more related to specific software titles), and memorizing just a few of the more basic ones may be worth the time savings they can afford you.
Tech Tip 35 - Laptop Accessories
The basic design of laptops makes them the obvious choice for those who need their computing to be mobile. All of the key components (and then some) of a desktop computer can easily be configured into a minimal housing that goes with you just about anywhere. That said, there are some things you may want to add to your laptop computer in order to make it more comfortable to use, more convenient, and to extend its life while enduring the strains of daily use. This Tech Tip will take a look at five items that any laptop owner may want to consider in order to enhance their experience.

Carrying Case
Laptop computers are made to be portable, but for the most part, they are not designed as indestructibly as they may need to be in order to survive the bumps and bruises of traveling. Most laptops include a carrying case of some sort, but many don't seem adequate to protect the valuable contents, and many others don't have the capacity to hold much more than the basic essentials. Finding a carrying case that will not only protect a laptop, but that will also hold all of the necessary accessories, and perhaps your other items (camera, digital audio device, etc.), may be worthwhile. It seems that on every flight I take I see at least one person come down the plane's aisle with a laptop case slung over their shoulder, allowing it to bounce off every seat they pass! How long can that last? Cases (http://www.geeks.com/products.asp?cat=NBB#NotebookCases) in a variety of sizes, styles, and materials are available to replace the one the laptop manufacturer was kind enough to throw into the deal. Whether you need the strength of a hard metal case, or the refined style of saddle leather, finding an appropriate case can be an important and worthwhile investment.

Mouse
Laptop computers all come with some sort of pointing device built in. Generally, you either get a touch pad located just below the keyboard, or a tracking pointer (a small knob) located in the middle of the keyboard. These may suit your needs just fine for occasional use, but for extended use they may not be the best ergonomic solutions. Adding a mouse to a laptop is simple and can greatly increase the comfort level, productivity, and enjoyment. You can obviously use a standard (USB or PS/2) mouse with a laptop, but space constraints may make a smaller device more appropriate. Miniature mice are available in wired (http://www.geeks.com/details.asp?invtid=BLU-USBMINIMOUSE&cat=MOU) and wireless (http://www.geeks.com/details.asp?invtid=931171-0403-DT&cat=MOU) versions, and the reduced size may be appreciated given the generally limited space available in a carrying bag. The reduced size may pose a concern to those with large hands, or those who have gotten used to a larger mouse, but there should be a mouse out there to suit anyone's taste. This Bluetooth-enabled miniature mouse (http://www.geeks.com/details.asp?invtid=GME225BDT&cat=MOU) could be great for use with a laptop computer, especially one with Bluetooth built into the system. That way, no external adaptor would need to be plugged in, and only the mouse would have to travel with you.

Card Reader
Card readers (http://www.geeks.com/products.asp?Cat=CAM#CameraMemoryCardReaders) have become fairly common devices, and prove to be extremely handy whether the computer is a desktop or a laptop. With the rising popularity of digital cameras, digital audio/video devices, personal digital assistants, and the handful of other items that use flash memory, having a central location to access these memory cards is a good idea. Desktops afford the convenience of having the card reader built in, but for the most part, laptop users will need to use an external USB device.
A reader capable of accessing 7, 12, or even a higher number of flash memory formats can be had in a compact device that won't consume much space in your carrying case. And for even greater space savings, perhaps you can kill two birds with one stone and go for something like this USB mouse / card reader combo (http://www.geeks.com/details.asp?invtid=KMOUSE-SDMSMMCN&cat=CAM).

Cooling
Cool electronics are happy electronics, and a laptop is no different. Today's computer chips can run extremely hot, and given the minimal space a laptop takes up, there isn't room for optimal cooling solutions. The heat created by the processor, hard drive, graphics processor, and so on is all going to radiate through the computer's housing, as well as being blown out the exhaust vent. Keeping these components as cool as possible can not only extend their lives, but also keep the user comfortable.
A notebook cooler, such as this one (http://www.geeks.com/details.asp?invtid=SIL-NB03&cat=NBB), connects via USB to power fans that cool the bottom of the laptop. This will not only keep the internal components of the laptop cool, but can also keep the user more comfortable by reducing the temperature felt by their hands, and perhaps lap. Another comfort feature found on some laptop coolers is a slight slope that allows the keyboard to be angled toward the user in a more ergonomic position. Some of these coolers may also feature additional USB plugs for connecting other devices, making them even more convenient, and a bit like the item to be discussed in the next section.

Docking Station
If you have a laptop computer that does a bit of traveling, but still does most of its work in one location, a docking station (http://www.geeks.com/products.asp?cat=NBB#NotebookDockingStations) may be worth a look. These devices allow the various connections to the computer to be made to the docking station, and then just one connection is made between the laptop and the station whenever you want to use it in that location. So, you could leave a mouse, keyboard, second monitor, power cord, network cable, and speaker wires connected to the docking station, and simply connect the computer when it is time to work. For those who regularly use their laptop in a desktop setting, this makes setup much easier than connecting all of these individual devices each time.

Final Words
Laptops are great devices that make conducting business away from the office simple and effective. Adding a few key accessories to a laptop can help take things to the next level in terms of convenience and comfort, as well as ensuring that your precious laptop is ready for just about anything your travels may throw its way.
Tech Tip 36 - SCSI Basics
The common computer utilizes either ATA or SATA hard drives, as was discussed in this previous Tech Tip (http://geeks.com/pix/techtips-010605.htm). There is another standard for connecting hard drives which doesn't find its way into too many personal computers, but is quite prominent in servers and high-end workstations: SCSI. SCSI stands for Small Computer System Interface, and if you don't want to pronounce each letter individually, it's OK to call it "scuzzy." SCSI, like ATA (Advanced Technology Attachment) or SATA (Serial Advanced Technology Attachment), can be used for connecting more than just hard drives to a computer system, and some of the other peripherals that can support SCSI include tape drives, optical drives, printers, and scanners. This Tech Tip will take a look at a few basic features of SCSI, mostly as related to hard drives, and how ATA and SATA drives may compare.

The Basics
The SCSI standard was first introduced in 1986 (the same year the ATA standard was released), and significant advancements have been made to it over the years in areas such as speed, bus width, bus speed, and the number of devices that can be connected. An adaptor card, also called a host adaptor, is required for connecting SCSI drives to the motherboard, but this serves more like a gateway for data transfer than a processing center. The SCSI controller allows system resources to remain freed up during heavy data processing because it is the individual drive controllers doing the bulk of the work. In addition, individual SCSI drives can communicate directly, requiring almost no CPU power, while ATA or SATA drives must all rely on the system to provide the processing for such communications. This becomes more important when considering that a single SCSI adaptor can support up to 15 drives (or other devices), which could overwhelm one controller if it had to manage the communications for all of them.
While discussing the means by which the various drives connect to a system, let's look at the physical connections. Headers (pin connection blocks) can be found onboard modern desktop
motherboards to support the 40-pin ATA connector and/or the 7-pin SATA connector. Due to SCSI's more specialized nature, only high-end motherboards may have built-in adaptors, and depending on the age and type, the header may have 25, 50, 68, or 80 pins. Stand-alone SCSI adaptors are available for PCI or PCI-X slots (http://www.geeks.com/pix/techtips-022405.htm), and can be selected to match the drives on hand. The cables (http://www.geeks.com/products.asp?Cat=HDD#SCSICables) required to connect SCSI drives are also different, not just because of the number of pins used to connect them, but because you can have so many drives on one channel. Cables can be chained together to add more drives to a SCSI channel, and in order to let the channel know where the end of the chain is, a device called a terminator must be installed at the end of the line. This cable (http://www.geeks.com/details.asp?invtid=UL20674&cat=HDD) features 3 connectors for Ultra160 SCSI drives and includes a removable terminator. In order for all of the devices on a SCSI bus to be identified by the system, there is a set of jumpers or switches found on each drive. Each drive on the bus must have its jumpers configured so that it has a unique value, or SCSI ID, which would translate to a number between 0 and 15 on a system capable of 16 devices.

Performance
The SCSI standard released in 1986 (SCSI-1) was a parallel interface that allowed for data transfer at a rate of 5 MBps on an 8-bit wide, 5 MHz bus. One controller channel was capable of connecting up to 8 devices. The latest standard, Ultra320 SCSI, is still a parallel interface that now supports data transfers up to 320 MBps on a 16-bit wide, 40 MHz bus, and one channel on an adaptor is capable of connecting up to 16 devices (generally, 1 adaptor and 15 drives). Let's compare this to ATA and SATA.
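Before moving on, it is worth noting that the SCSI ID jumper scheme described above is just binary encoding: a bus supporting 16 devices needs four ID jumpers, each representing one bit. A quick sketch (my own illustration; actual jumper layouts vary by drive, so always check the drive's label):

```python
# A 16-device SCSI bus needs 4 ID jumpers; each installed jumper sets one
# binary bit, so the ID is simply the value of the 4-bit number formed.
def scsi_id(jumpers):
    """jumpers: sequence of 4 booleans, most significant bit first."""
    value = 0
    for bit in jumpers:
        value = (value << 1) | int(bit)
    return value

# e.g. jumpers installed on bits 8, 4, and 1 (binary 1101) give ID 13
print(scsi_id((True, True, False, True)))
```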
The latest (and last) ATA standard, ATA-133, is a parallel interface supporting data transfers up to 133 MBps on a 16-bit wide, 33 MHz bus, with one channel capable of connecting 2 devices. SATA is in a transitional stage, as the SATA-300 standard is just now becoming commercially available to challenge the popularity of the SATA-150 standard. SATA-150 is a serial interface supporting data transfers up to 150 MBps on a 1-bit wide bus, where one channel generally supports one device (some controllers can allow multiple devices on one channel with degraded performance). The SATA-300 standard maintains the majority of the original features, but the
maximum transfer rate is now doubled to 300 MBps. Regardless of drive type, real-world performance never equals theoretical maximum values, but higher specifications imply higher potential real-world performance. Even with the latest SATA standard doubling its speed, it is easy to see that the more established Ultra320 SCSI standard has a sizeable edge in transfer rates (320 MBps > 300 MBps), in addition to the other factors that make SCSI so robust. By 2008, SATA throughput rates are expected to reach 600 MBps, but time will tell. Another speed comparison can be made between the drives in terms of how fast the disk platters spin. ATA and SATA drives generally spin at a maximum of 7200 RPM (some SATA drives now go up to 10,000 RPM), while it is standard for a SCSI drive to operate at 10,000 or 15,000 RPM. Higher rotational speeds aid in lowering times to access data, as well as when reading and writing.

Price
Because the actual controller is part of the drive itself, the price of a modern SCSI drive is a great deal more than either an ATA or SATA hard drive of a comparable size. Using the inventory at Geeks.com as an example, you can see that even much higher capacity ATA/SATA drives are a fraction of the cost of a SCSI drive. A 120GB ATA-133 Maxtor drive (http://www.geeks.com/details.asp?invtid=4R120L0&cat=HDD) costs $64, a 120GB SATA-150 Maxtor drive (http://www.geeks.com/details.asp?invtid=6Y120M0-N&cat=HDD) costs $100, and this 73GB Maxtor Ultra-320 SCSI drive (http://www.geeks.com/details.asp?invtid=8D073J0-DT&cat=HDD) costs a significant $287. In addition to the base price of the drive costing a good deal more, other factors such as a controller card, cables, and a terminator (http://www.geeks.com/details.asp?invtid=SCATOSCSITERM&cpc=SCH&srm=0) can add even more to the setup.
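Working out cost per gigabyte makes the price gap concrete, using the drive prices quoted above (circa-2005 Geeks.com inventory):

```python
# Cost per gigabyte for the example drives quoted in the article.
drives = {
    "120GB ATA-133":       (64.0, 120),
    "120GB SATA-150":      (100.0, 120),
    "73GB Ultra-320 SCSI": (287.0, 73),
}
for name, (price_usd, capacity_gb) in drives.items():
    print(f"{name}: ${price_usd / capacity_gb:.2f}/GB")
```

At roughly $0.53/GB for ATA versus nearly $4/GB for SCSI, the premium is paid for the controller-per-drive design and reliability discussed below, not capacity.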
Most ATA or SATA-based systems come with the controller built in, and for the most part the cables are also included or available for next to nothing (http://www.geeks.com/details.asp?invtid=MBBPLUS&cpc=SCH&srm=0). A controller built in to every drive contributes to the cost, but there is more to it than that.

Reliability
One of the key reasons for SCSI's higher price is reliability. SCSI drives are built to a much higher standard than typical ATA or SATA drives, and that doesn't come cheap. A typical SCSI drive may be specified with a Mean Time Between Failure (MTBF) of up to 1.5 million hours, while a typical SATA drive may have an MTBF of less than 1 million hours, sometimes much less. Referencing the Maxtor drives mentioned previously, the specifications on the SCSI drive show an MTBF of 1.4 million hours, while a fairly extensive search of Google and Maxtor's site couldn't turn up a value for these ATA or SATA drives, but
typical desktop hard drives are rated at approximately 500,000 hours. SCSI drives are expected to always be on, used in environments where 24/7 operation and uptime are not only necessary, but critical. The typical ATA or SATA drive is intended to be on for only about 8 hours per day. Your wallet might not agree, but the typical hard drive found in a personal computer is pretty cheap, and it is designed to be so.

Final Words
SCSI may not be an economical solution for a desktop computer, but it doesn't pretend to be. The high price tag comes with equally high performance and reliability, and in critical server and workstation applications, the added expense may easily be justified and recovered in a short period of time.
Tech Tip 37 - Memory Basics
All computers require memory to operate, but understanding the different types and how much you should have can be an issue. This Tech Tip will take a look at some of the common forms of modern computer memory and the different features they bring to the table.

The Basics
Before looking at a few of the specific types of memory, let's cover some of the basics that pertain to all of them. Memory, or more specifically Random Access Memory (RAM), is a type of computer storage that allows information to be accessed in any order. This allows for quick access to data that programs need to operate. The contrasting format would be sequential access memory, such as a tape drive, which forces the system to go through all preceding data to get to the piece it wants. As computers and software have advanced, the memory requirements have also changed. Most users have a version of Microsoft's Windows as their operating system, and each carries a unique set of memory requirements. Microsoft's website recommends (http://www.microsoft.com/windowsxp/pro/evaluation/sysreqs.mspx) a minimum of 128MB for the popular Windows XP operating system, but many other sources will argue that 256MB is the minimum and that 512MB is a better number to shoot for. Older versions of Windows required far less memory, with Windows 98 doing well with a minimum of 64MB and Windows 2000 getting by with 128MB. A minimum number may be misleading, as it may allow the system to operate, but perhaps not perform anywhere near its optimum capability. Those with Linux-based computers may experience successful installations with less memory than a typical Windows operating system, but Penguin power requires a certain amount of memory too. As little as 16MB of memory can successfully power a Linux system with a command line interface, and 64MB may be recommended for adding one of the common graphical interfaces.
Some of the various distributions will recommend even more memory, such as Xandros (http://www.xandros.com/) recommending 128MB, Linspire (http://www.linspire.com)
recommending 256MB (128MB minimum), and the CentOS (http://www.centos.org) forum mentions 256MB.

SDRAM
SDRAM, or Synchronous Dynamic Random Access Memory, is a few generations old at this point, but may still be found in a good number of computers. This type of memory was available in approved speeds of 66 MHz, 100 MHz, and 133 MHz, and was sold based on these speed ratings, i.e. PC66, PC100, and PC133. SDRAM for desktop computers (http://www.geeks.com/products.asp?cat=RAM#168-pinDIMMMemory) features 168 pins for electrical/data transfer on a module measuring roughly 5" long. SDRAM for notebook computers (http://www.geeks.com/products.asp?cat=RAM#144-pinNotebookMemory) features 144 pins for electrical/data transfer on a module measuring roughly 2 5/8" long. SDRAM could be found in early Intel Pentium and AMD K6/Athlon systems.
SDRAM was a big improvement over previous generations of computer memory, as the memory and processor were now synchronized and data was available as needed. Later generations of system memory (DDR and DDR-2) are built off of the foundation laid by SDRAM, while obviously adding more speed and greater performance.

RIMM
RIMM (Rambus Inline Memory Module), also known as Rambus or RDRAM, was a format launched by Rambus (http://www.rambus.com) as a successor to SDRAM. Desktop RIMM modules (http://www.geeks.com/products.asp?cat=RAM#184-pinRIMMMemory) feature 184 pins for electrical/data transfer on a module measuring roughly 5 1/4" long. The rating for RIMM memory is based on the maximum theoretical bandwidth (in MHz) and included speed ratings of 800 MHz, 1066 MHz, 1200 MHz, 1333 MHz, and 1600 MHz.
Early Intel Pentium 4 processors adopted the technology, but that was about the extent of RIMM's desktop popularity. Some server applications and home electronics devices (such as the PlayStation 2) also utilize RIMM memory, but DDR memory was launched at about the same time and eventually stole the show.
DDR
DDR, or Double Data Rate SDRAM, was the follow-up to SDRAM and is still in use today. All present AMD-based systems utilize DDR memory, and some Intel-based systems still use it (despite most being transitioned to DDR-2). The "Double" part of DDR comes from its ability to transfer twice the data of an SDRAM module operating at the same speed. This is accomplished by the fact that DDR technology can send data on both the rise and the fall of a clock pulse, while SDRAM only sends data on the rise.
DDR is marketed much like RIMM, as it uses its maximum theoretical bandwidth (again in MHz) to describe the various speeds available. Standard speeds of DDR include PC1600, PC2100, PC2700, and PC3200. The bandwidth can be tied directly to a memory clock speed, with the following correlation: PC1600 = 100 MHz, PC2100 = 133 MHz, PC2700 = 166 MHz, and PC3200 = 200 MHz. Many times, these speeds are referenced by a DDR rate instead of the straight clock speeds, so PC3200 would actually be called 400 MHz DDR, for example. Seen in other memory types, but perhaps most prominent in DDR, are specifications for modules operating at speeds other than the official ones listed above. Memory standards are governed by a group called JEDEC (http://www.jedec.org), but manufacturers can design products outside of these specifications for computing enthusiasts. This non-standard DDR may be capable of much higher speeds, and products carrying ratings such as PC4000 or PC4400 are readily available. DDR memory for desktop computers (http://www.geeks.com/products.asp?cat=RAM#184pinDDRDIMMMemory0) features 184 pins for electrical/data transfer on a module measuring roughly 5 1/4 inches long. DDR for notebook computers (http://www.geeks.com/products.asp?cat=RAM#200-pinDDRNotebookMemory) features 200 pins for electrical/data transfer on a module measuring roughly 2 5/8 inches long. You may have noticed that a module of DDR and a module of SDRAM have the same length. In order to prevent a user from installing the wrong type of memory in their system, the modules are notched differently to act as a key. SDRAM features 2 notches, while DDR features 1 notch at a different location.

DDR-2

DDR-2, or Double Data Rate 2 SDRAM, is the second generation of DDR memory and is just now reaching a price and performance level that makes it viable for mainstream computer systems. DDR-2 provides almost double the (theoretical) data transfer of DDR, but it still sends data on the rise and fall of the clock pulse.
The improvements are achieved through an increased number of memory buffers, lower electrical consumption, improved physical design, and an improved prefetch. The problem with most present DDR-2 is that these improvements are largely offset by higher latency within the memory, so the actual gains over DDR at the same speed can be minimal.
DDR-2 uses a similar naming structure to DDR, in that the maximum theoretical bandwidth is the typical method of describing a module. Instead of just a PC prefix, we now have PC2 to describe modules such as PC2-3200, PC2-4200, and PC2-5300. PC2-3200 has a DDR-2 speed of 400 MHz (4x100 MHz), PC2-4200 has a DDR-2 speed of 533 MHz (4x133 MHz), and PC2-5300 has a DDR-2 speed of 667 MHz (4x166 MHz). As with DDR (and others), overclocking memory is available in DDR-2, such as Corsair's (http://www.corsair.com/) DDR-2 PC2-8000, which operates at 1000 MHz! DDR-2 for desktop computers (http://www.geeks.com/products.asp?Cat=RAM#240pinDDR2DIMMMemory) features 240 pins for electrical/data transfer on a module measuring roughly 5 1/4 inches long. DDR-2 for notebook computers features 200 pins for electrical/data transfer on a module measuring roughly 2 5/8 inches long. Like DDR, DDR-2 is keyed with one notch (located at a different position than the one DDR notch) to prevent using the wrong type of memory.

Final Words

Memory is an essential component in any computer system, and as with most things, bigger and faster are always better. But if your budget doesn't allow for a few GBs of the highest performance modules out there, having the appropriate amount of good quality memory can still make a tremendous impact on system performance, reliability, and user happiness.
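The bandwidth numbers behind these PC and PC2 ratings follow directly from the arithmetic above. As a rough sketch (the function name here is our own, not part of any official specification), the peak transfer rate in MB/s is simply the effective data rate multiplied by the 8 bytes moved per 64-bit transfer:

```python
# Rough sketch of the memory bandwidth arithmetic behind the PC ratings.
# Peak bandwidth (MB/s) = effective transfers per second (millions) x 8 bytes,
# since a standard module moves 64 bits (8 bytes) per transfer.

def peak_bandwidth_mb_s(base_clock_mhz, transfers_per_clock):
    """Theoretical peak bandwidth in MB/s for a 64-bit-wide module."""
    return base_clock_mhz * transfers_per_clock * 8

# DDR moves data twice per clock (rise and fall):
print(peak_bandwidth_mb_s(100, 2))   # 1600 -> marketed as PC1600
print(peak_bandwidth_mb_s(200, 2))   # 3200 -> marketed as PC3200

# DDR-2's effective rate is quoted as 4x the base clock (e.g. 4x100 = 400 MHz):
print(peak_bandwidth_mb_s(100, 4))   # 3200 -> marketed as PC2-3200
print(peak_bandwidth_mb_s(133, 4))   # 4256 -> rounded to PC2-4200 in marketing
```

The slightly odd results for the 133 MHz and 166 MHz clocks come from those clocks actually being fractional (133 1/3 MHz and 166 2/3 MHz); the marketing names simply round to friendlier numbers.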
Addendum: The use of canned air products comes with a responsibility to use these products as directed and limit their access to children and teenagers. We feel that the safety of our children and young people is of the utmost importance in our society. Thanks to all of our Tech Tip readers who wrote and asked us to point out the potential dangers of canned air and similar products.

Tech Tip 38 - Canned Air Computer Maintenance

Many people don't think of their computer when doing a bit of cleaning around the home, but perhaps they should. We're talking about an effort far less unpleasant than doing windows or cleaning the bathroom, and a can of compressed air can take care of the bulk of the work for you. Cleaning your system on a somewhat regular basis can easily help extend the life of components, increase system stability, and reduce noise. This Tech Tip will take a look at a few areas to focus on, and all you really need to do is open your case and pull the trigger!

Case Fans

A well-designed computer case (http://www.geeks.com/products_sc.asp?cat=103) will have at least two (sometimes many more) case fans (http://www.geeks.com/products_sc.asp?cat=114) in order to exchange air with the room and cool the internal components. With the typical home computer being installed in, well, the typical home, it is reasonable to expect things like dust, hair, pet fur, and so on to be drawn into these fans. The blades of the fan, as well as the walls of the fan's frame, can grab hold of this debris, which creates a thin film that can eventually grow in thickness. As it does, the cooling performance of the fan will decrease, and more than likely the noise produced by the fan will increase. In addition, as the fan motor has to work harder to overcome the extra load and resistance created by the debris, the life of the fan can be expected to be cut short.
A healthy blast of canned air will knock a good deal of this dust and debris away, and if the fans are running while the blast is administered, they will hopefully eject all the dust out of the case. If not, it should settle to the bottom of the case, and a cloth can be used to wipe it clean. In addition to gunking up the fans, dust can also cover the fan grills (http://www.geeks.com/details.asp?invtid=FGM-80-BLU&cat=368), or other types of guards,
intended to protect fingers from the spinning blades. Keeping these clear will allow maximum airflow for efficiently cooling the components, as well as cutting down on noise created by air trying to flow past a restricted opening. Some case manufacturers now include removable filters in front of their case fans in order to make maintenance easier. These filters can then be removed and blown clean, while the fans and case internals remain relatively dust free. For those without such a thoughtful feature included in their case, fan filters (http://www.geeks.com/details.asp?invtid=FFM-80BLK&cat=368) are available in standard sizes to be added to just about any fan.

Heat Sinks

Heat sinks are necessary for cooling the heat-generating chips inside your computer, and keeping them clean will help them keep your machine running smoothly. Whether we're talking about a CPU heat sink (http://www.geeks.com/details.asp?invtid=AMD242FAN&cat=369) or something like a VGA heat sink (http://www.geeks.com/details.asp?invtid=CF201NBL&cat=371), dust and debris can not only cling to the blades/walls of the fan, but can also become trapped between the narrow fins of the heat sink body. As with case fans, a dirty heat sink fan will suffer a drop in cooling efficiency, create more noise, and perhaps have its life shortened. The heat sink body, generally constructed of aluminum or copper, is the means by which heat from the chip is transferred to the air. A layer of dust will act as a blanket and insulate the heat sink, thus preventing it from freely exchanging heat with the air.

Keyboard

Keyboards (http://www.geeks.com/products_sc.asp?cat=553) seem to suffer most when it comes to accumulating the debris of everyday usage. Not only do they gather dust and hair like most of the other components discussed, but they seem to be magnets for crumbs of food, cigarette ashes, and just about anything else that can slip down between the cracks.
Eventually a keyboard may look too gross to actually want to use, and you may even find that the key action is less responsive or even blocked by items under the keys. A sweeping blast of canned air will work wonders to eliminate the debris, and for best results hold the keyboard upside down while doing so. It might not hurt to give the keyboard a good shake while it is upside down, but be prepared, as you never know what might fall out.

Mice

Optical mice (http://www.geeks.com/products_sc.asp?cat=565) may be more immune to dust than the old roller ball mice (http://www.geeks.com/details.asp?invtid=GN-115&cat=561), but both styles are still prone to diminished performance caused by dust. Roller ball mice require fairly frequent cleanings in the socket around the ball, as it can sweep just about anything you roll over up into its mechanism. Optical (and laser) mice have a smooth bottom surface that may not have anywhere for dust to gather, but there are still places for it to settle elsewhere. The buttons on either type of mouse are generally not sealed, and junk can get into the small cracks around the edges, potentially interfering with the click action of the device. In addition, the area around scroll wheels can easily become gunked up with dust and debris, which a blast of canned air can alleviate.

Power Supply

Power supplies (http://www.geeks.com/products.asp?cat=39) are much like heat sinks with respect to keeping them clean. The housing of a power supply features a fan (or two) used to cool aluminum heat sinks found inside, and the same issues that impact the performance of a chip's heat sink and fan will be found in a power supply. Overheating power supplies can be a major cause of system instability and failure, but it seems they receive the least attention when it comes to preventive system maintenance. A good blast of air through each of the fan openings and vents on the side can help keep these critical components operating well. The components of a power supply run hot due to resistance in the process of converting 120V AC power to the various DC voltages needed inside the computer. Power supplies with better efficiencies are now available, which reduce the heat generated, but keeping the fans and heat sinks free of dust will help keep them doing so for a much longer time.

Laptops

Laptop computers (http://www.geeks.com/products.asp?cat=35) can benefit equally from a cleansing blast of air.
For example, the integrated keyboard and pointing device can collect the same crud behind them as a desktop version, and inverting the laptop and giving it a blast can set this debris free. The processor in a laptop computer may not be as readily accessible as in a desktop, but there are vents in the housing that lead to it. One set of vents allows a cooling fan to draw air in, and another set allows the heated air from the processor to be expelled. Giving these vents a puff of canned air will help ensure that the pathway doesn't become restricted and that the processor's heat sink doesn't become insulated by a layer of dust.

Final Words

A can of air won't take care of all of your computer maintenance needs, but one can really make keeping key components functioning at their best a breeze. Available at most consumer electronics and office supply stores, the (low) price of a can of air is well worth it to help maintain the large investment you have in your computer.
Tech Tip 39 - DVD Writing: 6 Tips for that Perfect Burn

Article by: Miguel Fernandez

For many, DVD writing (or burning, as it is commonly called) can end up being an expensive trial and error process, especially when burning video for playback on set-top DVD players. This Tech Tip is intended to eliminate the need to spend that hard-earned money in order to achieve that perfect burn. Please note, as with many things computer-related, many people hold strong opinions on certain subjects, and this is definitely one of them. This tip simply enumerates what we have found to work for many people. While this Tech Tip deals largely with DVD video playback compatibility, we at Geeks.com (http://www.geeks.com) encourage you to use your DVD burner responsibly and to observe all copyright laws for the area in which you live.

Tip #1 - It all starts with media

The bane of the DVD burning enthusiast is coasters. This is what bad DVD burns are commonly called, because this is about the only use left for a DVD disc that suffered a bad burn. With high quality media, you are apt to get fewer coasters per pack of media. The consensus of many is that some of the best media available are the discs made by a company called Taiyo Yuden. Many also have success with major brand media such as Memorex, Maxell, TDK, Verbatim, etc. They might cost slightly more than standard or no-name media, but if you are seriously after the highest write quality and playback performance, it pays to at least consider purchasing them. Not only will higher quality media burn better initially, but they will better retain their burn down the road. A note about Taiyo Yuden media is that it is frequently blank (that is, it is not branded on the non-writing side as Taiyo Yuden).
Most manufacturers, major brand and no-name alike, usually do not actually make their own media; they contract out to third-party manufacturers to make their media and print their brand name on it. This may actually result in your having two spindles of media with the same name printed on them that were made by two completely different companies. Because of this, many prefer to buy their media based on the actual manufacturer of the media (identified by a method known as Media ID). Two recommended ways of checking the media ID, or manufacturer ID, (http://www.digitalfaq.com/media/dvdmedia.htm) of your media are to use Nero's (http://ww2.nero.com/enu/index.html) InfoTool or the excellent third-party tool, DVDInfoPro (http://www.dvdinfopro.com/). Something else to consider is that many have found DVD burning drives to be quite finicky when it comes to media, particularly cheap, no-name media. What this means in practical terms is that more inexpensive media may not work in their drive or may burn at a reduced speed. For example, you may find that a spindle of 16x DVD discs that you bought may only write at 4x or 8x on certain 16x drives. Sticking to higher quality brands helps you achieve the best possible write speeds while maintaining the highest level of burn quality.

Tip #2 - Check your drive manufacturer's website for firmware updates

Many users, especially those who may be more technically challenged, may neglect to upgrade their current DVD drive with newer firmware. Firmware is a set of special instructions written onto a built-in chip on the drive that tells it how fast to burn, how to work with certain media, etc. The nice thing about firmware is that it can be upgraded to newer versions to enhance the features of the drive. Manufacturers commonly release new firmware for many reasons, such as media compatibility or better/faster drive performance.
As an example, BenQ (http://www.benq.com/) has released newer firmware for their DW1620 drive (http://www.benq.us/ServiceAndSupport/Drivers/drivers.cfm?product=647) to improve compatibility with certain 16x media (among other things). For newer firmware for your drive, check the manufacturer's website. Firmware is generally very easy to apply and can improve your drive's overall performance. Note that if you purchased a system with a DVD burner already installed, it may be an OEM version, and firmware may be difficult to find for this type of drive. There is one caveat on updating your firmware as well: be sure to use the firmware designed for your drive, because using the wrong firmware can kill the drive and invalidate your warranty.
Tip #3 - If burning video, be sure the format you use is supported by your standalone DVD player
Not all standalone players support both DVD-R/RW and DVD+R/RW formats. Check your player's manual to see exactly what formats it supports before you spend money on media. Generally, older players have more readability issues than newer players. Videohelp.com (http://www.videohelp.com/) is an excellent website for finding information such as your player's model number, what media to use, and which formats your player can handle. You will also find many great guides and excellent software there. One word of caution: no matter the player, for video you generally want to avoid using re-writable (DVD-RW/DVD+RW) media. This type of media is best suited for data (although many standalone players do support this kind of media, some people have still encountered video playback issues).

Tip #4 - Nero is your friend

When it comes to DVD media, the consensus is that Taiyo Yuden is the best media to use. Just like media, the consensus of many is that Nero's (formerly Ahead Software) Nero Burning ROM is the best burning software you can currently use for your DVD burner. The good news is that the standard version of Nero is packaged with many burners and is usually fine for most burning needs. Nero also offers a more advanced version called Nero 6 Ultra Edition, with features such as video authoring, for users who want to get the most out of their burner (http://ww2.nero.com/enu/Nero_6_Reloaded.html). Because of Nero's popularity, many third-party software packages automatically tie into Nero's burning engine for making actual burns on the DVD drive. Other companies offer excellent software packages as well (such as NTI (http://www.ntius.com/default.asp) and Roxio (http://www.roxio.com/en/index.jhtml;jsessionid=KG3W445VVHRVHLAQAQHCM4Q?_requestid=5373869)), but Nero has quickly taken the lead in this area in the last couple of years.
Tip #5 - Advanced tip: Bitsetting

This advanced tip is for those who may be having problems with DVD+R/RW media playing properly on a standalone DVD player. When a standalone DVD player plays back a disc, it looks at a set of low-level information to tell it what kind of disc it is (for example: DVD-ROM, DVD+R, etc.). Some older standalone players will only play back discs marked in this area as DVD-ROM. They may physically have no problems playing back a burned disc with video, but their firmware instructions are telling them not to (because it is not marked as a DVD-ROM in this area). There are two work-arounds for this type of issue. The first is to actually update your standalone player's firmware (see the videohelp.com website we mentioned earlier for information on how to do this and whether an update is available). The second is to have the burned disc report that it is a DVD-ROM rather than a DVD+R/RW disc. You can do this with a nifty feature called bitsetting (http://www.dvdplusrw.org/Article.asp?mid=0&sid=2&aid=42). Many drive manufacturers do support bitsetting, or booktype change, on +R/RW media. The method of changing the booktype of your media depends on the manufacturer of the drive. Some offer a utility to manually change it, while others will automatically change the booktype to DVD-ROM prior to actually burning your files onto the disc when using the proper burning software. While this tip may be more advanced than what you may be looking for, it does help solve some of those nagging compatibility issues some may still face.

Tip #6 - Advanced tip: When all else fails, burn your DVD in Nero using UDF 1.02

This is a tip some have found to help with many playback issues. If the video DVD you burned does not play, or you get a "Disc Error", try burning the disc using UDF (Universal Disc Format) (http://www.videohelp.com/glossary?U) 1.02.
If you have a somewhat older standalone DVD player, there is a chance that it cannot properly support newer versions of UDF (burning software such as Nero usually uses a newer version of UDF when burning DVD video). This last-resort tip has helped many with seemingly insurmountable DVD video playability issues.
Final Words

Following these tips, and taking the initiative to do some research on the subject, will ensure that you'll be quite satisfied with both your DVD writer and anything you create with it. While we have found that some people may be quick to blame the DVD drive or the DVD media as the source of a problem, this may not always be the case, as we've seen with tips 5 and 6. It is our hope that these Tech Tips will help you make that perfect burn.
Tech Tip 40 - Dual Display Desktop

The ability to run multiple monitors off of one computer is nothing new, but such configurations are seeing a surge in popularity. The falling prices of LCD monitors, coupled with the desire to comfortably have as much on screen as possible, are leading this surge. It might be an even more popular upgrade if people were aware of the ease of installation and the relatively reasonable costs associated with it. To that end, this Tech Tip will take a look at some of the basic requirements and features associated with setting up a dual display desktop on a personal computer.

Software

Displaying your desktop on multiple monitors is natively supported by Windows XP, 2000, ME, and 98, as well as by the popular distributions of Linux. Although this Tech Tip will focus on configuring a dual display setup in Windows, it is possible to go much higher than two monitors if your needs and budget allow. With the proper hardware installed (to be covered in the next section), enabling dual displays is quite easy. Simply navigate to the Settings tab of the Display Properties screen in Windows, and where most people are used to seeing controls for one monitor, you will now see two. The two monitors can then be enabled (attached)/disabled, resized, and reoriented to match the configuration they physically occupy on your desk. By selecting "Extend my Windows desktop onto this monitor," the cursor will be able to leave the primary monitor and freely navigate the second display as if it were all one surface. You
can move programs, icons, taskbars, and wallpapers onto the secondary monitor and start taking advantage of the increased desktop real estate. With this setup, a computer becomes much more convenient to use.
The typical home user may appreciate the extra space in order to spread out documents for easy reviewing without having to tab back and forth. On a more recreational level, perhaps they will utilize one monitor for their web browser, while the second one is used to display e-mail, instant messaging, MP3 playback, DVD video, and so on. Another benefit of dual displays in the home can be experienced in 3D games. Many games now support multiple monitors in order to enhance the experience. Unreal Tournament, Quake, and Microsoft's Flight Simulator are just a few of the game series that support multiple monitors to allow the player to further immerse themselves in the action. In business settings, dual displays may be even more valuable. In addition to being able to view multiple documents at once, some may just need more space to see what they are working on. Designers using AutoCAD can drag all of their toolbars onto the second monitor and use the entire surface of the primary monitor as an uninterrupted workspace. Another example of the benefits of dual displays can be seen with day traders, who may need to monitor the activity of numerous stocks at once. Having one window hidden behind another may be not only inconvenient, but costly, and multiple monitors might be an easy upgrade to justify when money is on the line.

Hardware

The software portion of the setup is easily addressed, considering that the functionality is built into just about every operating system available. The hardware required for running dual displays requires a bit more consideration, but isn't anything that even a novice computer user can't figure out. One thing you obviously need to have is a pair of monitors. The second thing you need is a means of connecting these two monitors to the computer, which can be accomplished in a variety of ways.
For those building a system from scratch, perhaps the easiest way to connect two monitors is via a dual-head graphics adaptor, such as this nVidia GeForce 6600 PCIe card (http://www.geeks.com/details.asp?invtid=PCX-PC6600128MTV&cat=VCD). The connectors on this card allow for either one digital and one analog, or for two analog monitors (using the included adaptor) to be connected to the system through the use of just one PCI Express x16 slot. There are also dual-head cards available for AGP and PCI; it is simply a matter of selecting the correct card for the slots available on your motherboard. In addition to selecting the correct motherboard interface, it is important to select the correct display connections. The card referenced in the previous paragraph provides one DVI connection for a digital display, as well as one 15-pin VGA connection for an analog display. Through the use of an included DVI-to-VGA adaptor, owners can then run the combinations of monitors mentioned above. Other cards may offer two DVI connections (http://www.geeks.com/details.asp?invtid=01-000080XXX-DT&cat=VCD) or two VGA connections. A VGA connection can be identified by the typical 15-pin (generally blue in color) plug that has been the staple on computers for years. A DVI connection is generally white in color, and is slightly wider than a VGA connection. Whether selecting a card for use with existing monitors, or buying the card and monitors all at one time, it is obviously critical to select components that will work together. For those with an existing system that could benefit from dual displays, replacing the existing graphics adaptor with a dual-head card is an option, but it is not the only one. Another graphics adaptor can be added to the system, and the existing card can be kept. This is nice for financial reasons, or if the performance of the existing card doesn't warrant replacement.
The key thing to consider with this approach is to select a secondary graphics adaptor that uses a slot available on your motherboard, and that offers a display connection to match your monitor. Newer systems may feature more than one PCIe x16 slot, which can make this happen, but you can also add a PCI card to any system currently running PCIe, AGP, or PCI graphics. The cards used in a dual display setup do not need to match, and it is acceptable to run a high-end primary card with a bargain basement secondary card, or any combination of cards in between. Trying to set up dual displays on systems with integrated video can produce mixed results. Expansion slots for graphics cards are generally available on systems with the video adaptor built into the motherboard, but using these slots on many such systems instantly disables the onboard video. Those desiring
dual displays on such systems need to investigate whether the onboard graphics adaptor can be part of the setup, or if two new connections need to be installed via either method described previously. However, some integrated video solutions will support dual displays, and may do so without additional hardware. For example, if the manufacturer includes the necessary connections, systems that utilize the Intel Extreme2 integrated graphics processor can run dual displays as is. There are also specialty cards, such as the ones made by Matrox (http://www.matrox.com/mga/corp/enterprise/products/highend.cfm), that can allow up to four monitors per card, and multiple cards per system! Though not made for the gamer, these cards are great for stock traders, banks, and enterprise server situations. Notebook computer owners aren't left out of the loop on dual displays, either. Most modern notebooks feature a VGA connection that can either be used as the primary display or as part of a dual display arrangement with the notebook's integrated display. Not all notebooks allow for this, as some will only mirror the display onto the attached monitor, so it is best to check the features and specifications before making any purchases.
Benefits

Financially, two smaller monitors should be much easier to justify than one larger monitor. Two 17-inch LCD monitors (http://www.geeks.com/products_sc.asp?cat=538), or even two 19-inch LCD monitors (http://www.geeks.com/products_sc.asp?cat=540), generally cost much less than just one 20-inch LCD monitor (http://www.geeks.com/products_sc.asp?cat=532). Looking at the Geeks.com inventory, it can be seen that two typical 17-inch models will cost about $400, while one 20-inch model will cost closer to $800! The price difference between two new smaller monitors and one new larger monitor is remarkable, but many people might already have something like a decent quality 17-inch monitor on hand. The value of a dual display desktop gets even better if you only need to buy one of the monitors. Many people retire perfectly good monitors just because they want to upgrade to a larger screen.
Simply adding another, similar monitor to the setup can be much more economical and provide even greater desktop real estate.
Desktop real estate is what this effort is all about. People want larger displays for three general reasons: (1) to make the image larger and easier on their eyes, (2) to be able to fit more content onto the screen, and (3) just for bragging rights, as bigger is better! Dual displays may be a good way to take care of numbers one and two, and they will help satisfy number three by scoring significant coolness points worth bragging about. As a point of reference in the desktop real estate department, let's take a look at the maximum resolution you can run with either a single 20-inch monitor or two 17-inch monitors. A 20-inch Sony LCD monitor (http://www.geeks.com/details.asp?invtid=SDM-S204_B-DT&cat=MON) supports a maximum resolution of 1600x1200. Any one of the 17-inch or 19-inch LCD monitors at the links above will provide a maximum resolution of 1280x1024. Place two of these monitors side-by-side in a dual desktop setup and you have an effective resolution of 2560x1024. If your physical desktop makes it more convenient to configure your Windows desktop so that one monitor is above the other, instead of side-by-side, you could then have an effective resolution of 1280x2048. As you can see, the total area in the dual display configuration is far greater than that found on just one 20-inch monitor. From an aesthetic standpoint, people may like to have two of the exact same monitors on their desk. It is not necessary that the monitors in a dual display setup match in terms of size, brand, or technology (LCD or CRT). Any two monitors can work in a dual display setup as long as the connections on the monitor and the graphics adaptor match up. That said, there may be other reasons why someone would want to have similar, if
not identical, monitors in their array. The display specifications are worth considering when adding a different type of monitor in order to create a dual desktop arrangement. Factors such as contrast, brightness, resolution, refresh rate, and dot pitch are just some of the variables that can make one monitor look different from another. In general, it is not a big deal for displays to look different when they are in different locations, but when you have them side-by-side on your desk it may be more of an issue. If the image quality isn't similar, shifting your eyes back and forth between the two monitors can become a strain as your eyes try to adjust to each. Many quality LCD displays have specifications that overlap and should be comfortable on the eyes, but a nice crisp LCD next to a slightly worn CRT is a different story.

Final Words

Dual display configurations are not difficult to set up and offer an economical alternative to upgrading to a larger monitor. The convenience of such a monitor arrangement can be reaped in both business and personal applications, and once you experience working or playing on two displays, you may wonder why you didn't do it sooner.
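The desktop real estate comparison above is easy to verify with a little arithmetic. This quick sketch (our own illustration, not tied to any particular monitor model) totals the pixels for each configuration discussed:

```python
# Compare total desktop pixels: one 20-inch LCD vs. two 17-inch LCDs.

def total_pixels(width, height):
    """Total addressable pixels for a display surface."""
    return width * height

single_20 = total_pixels(1600, 1200)    # one 20-inch LCD at 1600x1200
dual_side = total_pixels(2560, 1024)    # two 1280x1024 panels side-by-side
dual_stack = total_pixels(1280, 2048)   # the same two panels stacked vertically

print(single_20)                        # 1920000
print(dual_side)                        # 2621440
print(round(dual_side / single_20, 2))  # 1.37 -- about 37% more area
```

Whether the two panels sit side-by-side or stacked, the pixel count is identical; only the shape of the combined desktop changes.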
Tech Tip 41 - Voice Over Internet Protocol - VoIP 101

At this point, most people have probably heard of VoIP, and many may have used it, but they may not fully understand the basics of this rapidly expanding technology. This Tech Tip will take a look at some of the basic features, modes of operation, and other background information on one of the latest ways technology can be used to connect people. The acronym VoIP stands for Voice over Internet Protocol, and the basic concept can be fairly well understood just by looking at the words that make up the acronym. For a more complete definition, VoIP can be described as a means of converting analog audio signals (your voice) into digital data that can be transferred over the Internet.
Getting Set Up

When many people think of VoIP, they instantly think of services like Vonage (http://www.vonage.com/) or Packet8 (http://www.packet8.net/) that are in business to become your full-feature telephony (http://www.webopedia.com/TERM/t/telephony.html) solution. In fact, these companies usually use terms such as "Broadband Phone Service" to refer to their full range of products. Although these companies do offer VoIP services, the term can be used to describe something much simpler than a full-fledged telephony service. Taking a look at the basic definition again, we can see that we only need to be able to capture the audio and transmit it digitally in order to have VoIP. This can be done without subscribing to a service and without a specialized telephone or other equipment. Basic VoIP can be accomplished by an internet-connected PC with a soundcard sporting a microphone and speakers. Keeping that in mind, let's look at the three basic ways people implement VoIP.
The first way to implement VoIP is the PC-to-PC version of VoIP as described in the previous paragraph. With a fairly typical computer connected to broadband internet, and some kind of software for managing the communications, anyone can be up and running with a basic version of VoIP that may be totally free. Such software is available as a free download, and Skype (http://www.skype.com/) is one of the more popular applications in use. Skype allows members to make free PC-to-PC calls regardless of distance and, for an extra fee, they can send/receive calls from standard telephones. As mentioned, you only need a PC with a soundcard (http://www.geeks.com/products.asp?cat=SND), a microphone (http://www.geeks.com/products_sc.asp?cat=785) and a decent set of speakers (http://www.geeks.com/products.asp?cat=SPK), but there are also specialized USB VoIP telephones (http://www.geeks.com/details.asp?invtid=SE-P1K&cpc=SCH&srm=0) that make it even more convenient. Using a USB VoIP phone not only makes the communication seem a bit more traditional, but it also frees up the soundcard for typical audio applications (MP3s, games, etc.), while the phone's circuitry handles all audio processing for phone calls. The second way is by using an ATA, or Analog Telephone Adaptor (such as this one from Cisco (http://www.cisco.com/en/US/products/hw/gatecont/ps514/products_data_sheet09186a00800c4139.html)), which may be the most common form of VoIP in use today. With an ATA, a standard telephone can be plugged into the adaptor just as you would plug it into a phone jack in the wall. The ATA is then connected to your network, or directly to your broadband internet gateway, in order to convert the analog audio into digital data for transmission over the internet. Vonage (http://www.vonage.com/) and other similar services use ATAs to implement VoIP, as it is a simple approach for people with existing phone equipment that they would like to continue using.
In addition, it can allow for a home pre-wired for multiple phone jacks to continue operating as is, with the only new piece of hardware required being the ATA.
The third way to implement VoIP is via IP phones. An IP Phone may appear to be much like your standard telephone, with the only physical difference being that the (RJ-11) phone jack has been replaced by an (RJ-45) Ethernet connector. Internally, there will be some differences in the circuitry in order to allow the conversion from analog to digital to happen right in the
phone. An IP phone is then connected directly to your network or broadband internet gateway, with no adaptor required. Packet8 (http://www.packet8.net/) is one service that offers IP phones to their customers, in addition to the more typical ATA VoIP service. The downside to IP phones is that the implementation requires all new telephones designed solely for use with VoIP; any existing analog equipment can not be used.

Data Transmissions

Your standard phone line uses the PSTN (Public Switched Telephone Network, also sometimes called POTS (http://www.webopedia.com/TERM/P/POTS.html), for Plain Old Telephone Service) for connecting the parties involved in a phone call. Although this system is reliable, it is not very efficient, and considering it has been operating under the same basic principles since the invention of the telephone, it might be surprising to realize we have such an antiquated system. A call made on this system is referred to as circuit switched (http://www.webopedia.com/TERM/C/circuit_switching.html), since the two parties are constantly connected throughout the duration of the call like a circuit. A VoIP call doesn't use the PSTN, and it does not keep the two parties connected throughout the conversation. A VoIP conversation is referred to as packet switched (http://www.webopedia.com/TERM/P/packet_switching.html), as the data is transmitted in packets (or smaller chunks) and the connection is made only as these chunks of data need to be transmitted. One benefit of this method is that packet switching lets the data travel from caller to caller over the most efficient path on the Internet, and not over one dedicated line. Additionally, because there isn't a dedicated connection for the conversation, bandwidth is conserved, and more phone calls can be placed in the space typically required by one PSTN call.
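To make the packet-switching idea concrete, here is a minimal Python sketch that chops an audio stream into small packets, roughly the way a VoIP application does before handing them to the network. The packet size and address below are made-up illustrative values, and real VoIP software wraps each chunk in RTP/UDP headers rather than sending raw audio:

```python
import socket

def packetize(audio, size=160):
    """Split a continuous audio stream into fixed-size chunks (packets)."""
    return [audio[i:i + size] for i in range(0, len(audio), size)]

# One second of 8 kHz, 8-bit mono audio is 8000 bytes -> 50 packets of 160 bytes.
stream = bytes(8000)
packets = packetize(stream)
print(len(packets))  # 50

# Each packet travels independently; the network is free to route every one
# of them over whatever path happens to be most efficient at that instant.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
for pkt in packets:
    pass  # sock.sendto(pkt, ("192.0.2.10", 5060))  # placeholder address/port
sock.close()
```

Because each chunk is self-contained, the line is only occupied while a chunk is actually in flight, which is exactly why packet switching conserves bandwidth compared to a dedicated circuit.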
Even greater efficiency can be achieved through VoIP's use of data compression, which is equivalent to zipping up the data before transmitting it (and unzipping it at the other end).

VoIP Protocols

Just as with most other means of communicating data over the Internet, there are a few VoIP protocols that have been developed by various groups and companies. Some of the current protocols include SIP, IAX, H.323, MEGACO (http://en.wikipedia.org/wiki/Megaco), and MGCP (http://en.wikipedia.org/wiki/MGCP). Let's look at some details of the first three, as they may be the ones you are most likely to encounter.
SIP (http://en.wikipedia.org/wiki/Session_Initiation_Protocol), or Session Initiation Protocol, is the most commonly used VoIP protocol and was developed by the Internet Engineering Task Force (IETF (http://www.ietf.org/)). One issue with SIP is that it is not particularly NAT (Network Address Translation) friendly. NAT is what allows a local area network to manage one set of IP addresses for internal communications and a second set of IP addresses for external communications. IAX (http://en.wikipedia.org/wiki/IAX), or Inter-Asterisk eXchange, is another VoIP protocol that is used with the free Asterisk (http://www.asterisk.org/) software for managing a PBX (Private Branch eXchange). IAX (or more recently IAX2) deals with NAT better than SIP, but its implementation is limited to Asterisk servers only. A PBX is a private phone network used within an organization that can connect all internal lines to each other, as well as providing a central access point for connecting to any outside line. H.323 (http://www.webopedia.com/TERM/H/H_323.html) was originally developed by the International Telecommunication Union Telecommunication Standardization Sector (ITU-T (http://www.itu.int/ITU-T/)) for use with multimedia conferencing over local area networks (LANs), and was later applied to VoIP applications. This is an older protocol that isn't commonly used.

Benefits

VoIP offers many benefits over traditional telephone service, which has it poised to become the phone system of the near future. Many traditional long distance carriers actually use VoIP themselves, as it makes routing long distance calls more convenient than over traditional lines. So, even if you don't subscribe to a VoIP service personally, it is likely that you have already used it, whether you know it or not. One of the main benefits of VoIP is the flexibility. You can take your phone, and your same phone number, with you anywhere in the world where a broadband internet connection is available.
This can be extremely useful for business travelers who cannot count on their mobile phone to work internationally, and who appreciate having a dedicated phone number to use for staying in contact with associates and clients. This flexibility is made easier through the use of a PC-based or IP-based telephone, but even a typical ATA can be packed up and stored in a briefcase. Another key benefit is the price. Taking a look at the offerings from services like Vonage (http://www.vonage.com/) or Packet8 (http://www.packet8.net/) shows that the traditional phone company may not be able to compete. In addition to offering local and
long distance for lower rates, they also bundle in all the extra calling features that people have grown to rely on (such as caller ID, call waiting, three-way calling, etc.). VoIP also allows some more advanced features not available with your typical land line. Many services offer the ability to check voice mail via the web, or even to have voice messages sent to you as an attachment in an e-mail. The service's web interface may also allow for a detailed calling log to be reviewed, for customized messages that can be applied to certain callers, and for special call forwarding settings to be applied.

Final Words

VoIP is nothing new, but as the technology advances, its popularity is surging. The efficiency, calling features, and competitive pricing have it poised to overtake the antiquated PSTN system as the way we make phone calls. This Tech Tip covered some of the basic ideas and features of VoIP, and hopefully offers a better understanding of the technology and features of Voice over Internet Protocol.
Tech Tip 42 - Computer Interconnections (Intro and Electrical Safety)
Article by Roy Davis

The catchword these days is wireless. Wireless Internet access, cordless phones, even a wireless mouse to get the clutter of the mouse cable off your desk. My day job for the past 30 years has been working as an electrical engineer designing all these wonderful wireless gadgets. My moonlight job for 25 years has been conducting seminars and writing for publication to help people use their computers effectively. If it is the wireless age, I'll tell you what, I've never seen so many wires. I've got it pretty much under control on top of my computer desk with the cables tucked away, but underneath it's a nightmare of wires, cables, and power cords. This series of Tech Tips will tackle the tangle of wires around your computer and consumer electronics gadgets, as well as help you understand how those wires work and how to make your interconnections work better. Talking about interconnections may not be as jazzy as arguing about the latest high-performance processor chips from Intel and AMD, but where do 90% of computer hardware problems come from? The interconnects, that's where! Getting the right plug in the right socket is not a trivial task when there are so many, in a seemingly endless assortment of sizes and shapes, that seem to come loose at just the wrong moment, and that's only on the outside of the computer! Open up the case of a well-equipped PC and you are greeted with another impenetrable jungle of interconnect cables and wires. Let's spend some time sorting out this mess so you can use logic to solve your next interconnect problem instead of just plugging into random holes with the hope that you find the one that works before the one that burns it up.
OK, we are going to talk about how you can keep wires and cables from causing hum and noise in your computer speakers, data loss due to intermittent connections, and lots of other topics. But let's take that last thought about "before it burns up" and talk about computer electrical safety first. Naturally, I did some research by poking around on the Web for electrical safety as related to personal computers. I was shocked (sorry, couldn't help it) by the lack of practical information about handling electricity in and around your computer, given that all computers run on electricity. Your laptop runs on electricity and there are special considerations there too; there are things to be careful of even when running off the battery. Let's start with the simple safety issues. How about that power cord? There's one for your computer, one for the monitor, one for the printer and so on. Power cords put up with a lot, but they do fail. You should always pull wires and cables out by grasping the plug, not the wire; pulling on the wire can severely damage the interconnect. I inherited a piece of equipment when I bought this house, and its cord had apparently been pinched at one time. The safety ground wire was broken off and shorted against the hot wire inside the cord. There are safety circuits, which I'll explain later, that tripped, but the previous homeowner got the equipment working by disabling the safeties. The point is, people do things to electrical equipment that result in hazards, and I could have been killed by touching the cabinet with the power on if I had not replaced the cord before plugging the equipment back in. Your computer has exactly the same type of three-wire cord and a metal cabinet. The house wiring in the room where your computer sits probably doesn't have all those safeties, so it's ripe for an electrical shock if the cord is not in good shape. Enough of the horror stories; let's get down to the technical details.
The wiring in your house, and the power cord that leads to your computer, uses three wires. These wires are required by contemporary electrical wiring code to be colored black, white, and green. The black wire is the hot one that is the most dangerous to touch. The white wire is the common, or return, that is connected to the electrical ground back at the main power box in your house or apartment. The electrical power flows back and forth in the black and white wires. Some equipment gets by with connecting only the black and white wires and depends on completely insulating all the electrical components inside. Most computers, even the latest laptops, use all three wires, where the green wire adds a safety
ground. This safety ground is connected to the metal parts of your computer case. Under normal conditions, no current should flow in the green safety ground. The green wire is in fact connected to the white wire at the electrical ground back at the power box. So why is the safety ground so important? If the wiring in your house is done correctly, and you drop a screw inside your computer that shorts the hot side of the power line to the case, the electricity would take the path of least resistance and travel back through the safety ground wire instead of through your body. The safety ground can handle the current, so it should kick out the breaker and shut down the circuit, keeping the electrical short from turning into an electrical fire. Now you know why it's so important to have your power cords in good shape, but what should you look for? First, you can't do a good job of inspecting a power cord under your desk. Take the cord to a place with good light and look at it closely along every inch. Especially near the plugs, bend the cord back and forth and look for cracks in the insulation. No cracks are acceptable. If the cord appears pinched, stretched, or cracked, throw it in the trash and get a new one. A new computer cord can be had for a very few dollars at your local computer store or electronics supply. When I see them on sale, I pick up a few spares to have around so I don't hesitate to toss a defective one. I mentioned some safeties built into your house. The first line of defense is the circuit breaker. Every circuit in your house must have breakers, as they are required by code. Circuit breakers sense when too much current is being drawn and shut off the power. The only purpose of circuit breakers is to keep the wiring from going up in smoke with a short circuit; they don't protect you from shocks. A second set of devices is installed in certain areas of your house like bathrooms, the kitchen, the garage, and places where damp floors are possible.
This gadget goes by the acronym of GFCI, or Ground Fault Circuit Interrupter. This is your friend and the best protection against shocks from faulty electronics equipment. It senses an imbalance of currents in the black and white wires, which means the current is flowing into the green wire or, worse, into your body. As I sit here at my computer desk I'm surrounded by electronics goodies, some of which I hold onto and have prolonged contact with for hours, such as a VoIP (Voice over Internet Protocol) headset. These gadgets are all connected back to the power line one way or another. The metal case of my computer is grounded through the safety ground. So, if I should have a faulty device that I am touching, and then lean on my computer case to pop a new DVD in, I could be
electrocuted. That's why I prefer to have a GFCI on the circuit in my computer room. You should have an electrician install one for you; it's cheap protection. If you are handy, here is a Web site with GFCI Info. While you are at the local hardware store, pick up an electrical circuit tester. It's a big yellow or orange plug with three lights. When you plug it into a receptacle to test it, the pattern of lights will tell you if the wires in the wall are hooked up correctly. Every house I've owned has had improperly wired receptacles, and this is really dangerous. The tester also has a button that can trip the GFCI to make sure it really works. Keep the above in mind while using a laptop computer if it is plugged into the wall. If you are going to sit out on the concrete patio, by the pool, or anywhere that is damp or has concrete, abandon the power supply and run off the battery. Sitting in a pair of shorts on an aluminum chair, or even just with bare feet on the concrete, with a computer in your lap connected to the power lines is an invitation for trouble. The battery is going to die of old age if you keep your power supply plugged in all the time, so exercise the battery while you enjoy the sun.

Final Words

This Tech Tip covered Computer Interconnections and Electrical Safety. By understanding how and why we use power cords with grounds, and even GFCI, we'll have a better chance of surviving our dealings with computers. It might be a wireless world, but we've never had to deal with so many wires between the gadgets we love. Although making these interconnections work is a challenge, we might as well learn as much as we can about cables and connectors so that we can confidently plug in that new computer toy and get it working on the first try.
Tech Tip 43 - Computer Interconnections (USB: The Answer to Our Serial Port Prayers)
Article by Roy Davis

Last week, we talked about how to get power to your computer without getting a jolt yourself. This time, let's go into the latest in connecting gadgets to your computer: the Universal Serial Bus, or USB. It has made computer interconnect so simple that it's easy to forget what a nightmare serial ports used to be. When I say serial port, oldsters among us will flash back to the big, clunky 25-pin connector for the cable that went to an external telephone modem. Since early modems could send or receive only one bit at a time, the data had to be spoon-fed to them on a single wire, one for sending and one for receiving. Add a common wire and you only need three wires for the data. Then why did they design the serial port with 25 pins? Back in the even earlier days, the serial connection was used for mechanical teletypes and printers that had separate control signals for announcing when the machine was ready to send and ready to receive the next character, and even a pin for when the telephone rang. When PCs became popular, the serial port was pared down to nine pins by throwing away most of those unused control signals. Apple went one better by reducing the pin count to eight and using a much smaller circular DIN connector. (DIN is short for Deutsches Institut für Normung e.V., the standards-setting organization for Germany. A DIN connector is a connector that conforms to one of the many standards defined by DIN.) Over the years, the serial port grew to include many uses other than telephone modems. It was used to connect early printers, scanners, and security devices (dongles). The lowly serial port was even used with a crossover cable for the first poor-man's
network, where two computers are connected to transfer files. OK, that's the history of the traditional serial port, but what most people miss is that there are all sorts of other serial ports on your computer. The PS2 keyboard connection is really a serial port, as is the PS2 mouse. Your network cable is really just a very high-speed serial port. There are other ports that could be serial ports, if the data rate were just fast enough. An example is the printer port, which sends eight bits at a time over eight separate wires. That makes for fatter cables and bigger connectors. How about your joystick or other game controller? There is only one joystick port on a PC and it's a pretty crude design. With all these different, incompatible peripheral interfaces, no wonder it's a nightmare of tangled wires behind my computer. Finally, the designers of computer interconnects have come up with a modern solution: the Universal Serial Bus, or USB. Inspired by the Apple Desktop Bus (ADB) found on Macintosh computers, which could support multiple and varied devices, USB started out to be better than the 115 Kilobit per second old-style serial port by upping the data rate to 1.5 Mbps, but that's now known officially as Low Speed. With this data rate, USB keyboards, mice, and trackballs are easily handled. Then came Full Speed at 12 Megabits per second, and USB 1.1 was suddenly the interconnect of choice for printers, scanners, and even external computer sound systems. Recently, USB 2.0 has been introduced and now we have a High Speed of up to 480 Mbps, though most devices can't keep up with that blistering speed. This makes USB the method of choice for connecting high-speed Flash memory devices, external hard drives, DVD/CD readers and writers, external professional grade video and audio interfaces, and your network connections. If my computer had to exist with only two connectors on it, my choice would be power (because you have to) and USB.
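The practical difference between those three USB speed grades is easy to see with a quick calculation. This Python sketch estimates how long an ideal transfer of a CD's worth of data (700 MB) would take at each rate; real-world transfers are slower because of protocol overhead, so treat these as best-case figures:

```python
def transfer_seconds(megabytes, megabits_per_second):
    """Ideal transfer time: 8 bits per byte, no protocol overhead."""
    return megabytes * 8 / megabits_per_second

rates = [("Low Speed (USB 1.0)", 1.5),
         ("Full Speed (USB 1.1)", 12),
         ("High Speed (USB 2.0)", 480)]

for name, mbps in rates:
    print(f"{name}: {transfer_seconds(700, mbps) / 60:.1f} minutes")
```

At Low Speed the transfer takes over an hour; at High Speed it is done in well under a minute, which is why USB 2.0 made external hard drives and DVD burners practical.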
For older peripherals you can buy inexpensive adapters that have a connector for your legacy device and a USB plug to go into your computer. Until USB came along, the connectors on the back of your computer were either big, clunky (but rugged) throwbacks to an earlier age, or tiny (and flimsy) round ones that are difficult to get aligned and plugged in. USB connectors are small, but built to take some abuse, and it's easy to figure out which way to line one up with the socket, even when feeling with your fingers behind your computer. The Type A USB connector is what you find on your computer and on the output side of USB hubs. It's flat, with four pins that are protected inside a metal shield. Being flat, you can only try sticking it in the jack two ways, and
the wrong way won't even try to go in. This is known as being keyed. The other common type of USB connector is the Type B. It looks very different from the Type A and is generally used only for the input side of hubs. This way, you can't mix up the uplink and downlink cables. There are miniature USB connectors, but they are generally specific to very portable devices like digital cameras and MP3 players. The miniature USB connectors always pair with a standard Type A connection at the computer end. I mentioned USB hubs. Previous computer interconnects were one-to-one, so you needed one port for every device you wanted to plug into your computer. The B in USB means bus in the true sense of the word: it can carry several passengers at once. By using a hub, several devices can be plugged into one USB port on your computer. Hubs can be built into almost anything. You can buy a USB keyboard that has a hub in it. That way, you can have one cable running down to the computer and plug your mouse or trackball (or both!) right into your keyboard. Yes, you can have multiple pointing devices and use whichever feels right at the time. Another innovation of USB is that it doesn't just carry data; it carries power, too. The PS2 mouse and keyboard ports did supply power for those devices, but USB has enough juice behind it to run lots of other things. For instance, I just bought a wireless network adapter from Geeks.com (http://www.geeks.com/details.asp?invtid=PQPWU221L&cpc=SCH&srm=0), and it connects to the computer using USB. That means only one cable for both data and power, which is a lot better than having a separate power brick to find a socket for. Peripheral devices used to have to be plugged into their special port on your computer. They had to be plugged in before powering up your machine. Plugging or unplugging while the power was on was a good way to crash your computer or even damage the hardware.
Of course, you had to make sure you had installed the proper driver software for the device, and you had to match the driver with the device and the right version of Windows. If you bought an older device, it might not even have a driver that would work with the latest Windows. USB replaces these headaches with automatic device installation when you plug a device into the computer. You can even plug it in after booting up. Just don't forget to stop the device using the applet in the system tray before removing it with the power on. Normally, USB power is limited to 100 milliamps, or one-tenth of an Amp, when you plug a device in. That's enough to power most gadgets, but if you have a power-hungry unit it can negotiate a boost up to 500 milliamps from most computer USB ports. Now, if you are using a hub, the computer isn't going to want to supply the boosted power to many devices. To overcome this obstacle, you can buy hubs that have their own power
brick. The unpowered hubs are fine to tote along with your laptop computer to hook up low-powered devices, but for your desktop machine, you should get a hub with its own power supply. Cell phones and MP3 players are now coming with USB cables for charging their batteries. They don't need a power brick of their own; they just steal power from your computer. On a trip, this is great because you can charge your portable devices from your laptop and carry only the charger for your computer. My main home computer has three pairs of USB ports on the back and one on the front. I know that some are USB 2.0 and the others USB 1.1. So how do you tell if your computer is equipped with USB 1.1 or 2.0 ports? Often you have both, so check all of them to be sure where you can plug in your low-speed devices, like a mouse, or a high-speed device like a new flash memory stick. First, to make sure you aren't reading any external USB hubs you have, unplug them from your computer and reboot. Start at the Control Panel and select the System applet. Pick the Hardware tab and then the Device Manager button. Expand the Universal Serial Bus controllers item at or near the bottom of the list. Scan through the list and look for "USB 2.0" and/or "Enhanced". The term Enhanced means that it is USB 2.0 even if it doesn't explicitly say 2.0. If you do happen to have both USB 2.0 and 1.1 ports on your computer, and they are not labeled, you might be in for some trial-and-error to get your high-speed devices stuffed into the right hole. It doesn't hurt to have a low-speed device plugged into a high-speed port, but a high-speed device in a low-speed port will only run at the lower speed. When you do plug a USB 2.0 device into a USB 1.1 port, you should get a message balloon on the taskbar that warns you of the mismatch (assuming you are using Windows XP). USB ports are usually in pairs on the back of your computer. Say you have one USB 2.0 pair and one USB 1.1 pair.
Remove all your USB devices and boot up your computer, then plug in your high-speed devices one at a time until you find the USB 2.0 ports; after that, you can plug your slower USB 1.1 devices into the other pair of ports. If you have more devices of one type than you have ports on your computer, buy a USB hub to split out your ports. Hubs are cheap and give you lots of expansion capability. Just make sure, if you have USB 2.0 devices, that you get a USB 2.0 compliant hub!
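The power-budget issue with unpowered hubs can be sketched with simple arithmetic. The 100 mA and 500 mA figures come from the discussion above; the function below is just an illustrative model (a real hub also reserves some current for its own electronics):

```python
def unpowered_hub_ok(device_currents_ma, port_limit_ma=500):
    """An unpowered hub can only pass along what one upstream port supplies."""
    return sum(device_currents_ma) <= port_limit_ma

# Three low-power devices at the default 100 mA each fit within one port:
print(unpowered_hub_ok([100, 100, 100]))   # True
# One device boosted to 500 mA plus anything else exceeds the budget:
print(unpowered_hub_ok([500, 100]))        # False
```

This is why a powered hub, with its own supply, is the safer choice for a desktop full of power-hungry peripherals.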
If you are really a hardcore geek, you can read lots more about USB at http://www.beyondlogic.org/usbnutshell/usb-in-a-nutshell.pdf.

Final Words

USB has taken the computer industry by storm. Every laptop and desktop computer sports several USB ports, and you almost can't buy a peripheral these days that doesn't have a USB cable attached. It's so popular because people have struggled for years with many different and incompatible computer interfaces. You can start packing away those 25-to-9-pin serial port adapters and put away those fat parallel printer port cables that have been replaced by USB. Your joystick for games, mouse for drawing, and trackball for text editing can all plug into a single hub and from there into a single USB port on your computer. With hot plugging convenience, you can jack in all sorts of new toys and enjoy the benefits of USB interconnectivity.
Tech Tip 44 - Computer Interconnections (Wired for Sound)
Article by Roy Davis

Early PC sounds were limited to beeps and boops, which made for some pretty boring games, and if you wanted to listen to music, you had a separate cassette deck to play your tapes. Now even laptop computers can play CD music in stereo, games come in surround sound, and we do our music recording, organizing, and playback on our computers. Here are some tips on how to connect all those pieces of a computer audio system together to get clear and satisfying sound.

1. Basic Sound Card Wiring

The basic computer sound system starts with a sound card, either a real pluggable card or one integrated into the motherboard. The traditional sound card has four stereo 3.5 millimeter jacks for inputs and outputs. One jack is a direct microphone input and one can drive a pair of speakers or headphones. Two other jacks are for Line In and Line Out. We'll discuss these two a bit later. You can use the microphone and speaker jacks to hook up a headset for Internet VoIP phone calls or gaming. If gaming is your thing, you can step up to a surround sound card that can drive several speakers. Speaker configurations are known by a numbering system where 2.0 stands for simple stereo with a left and a right channel speaker. Systems that are 2.1 add a subwoofer for better low frequency response. True surround sound systems start with 5.1 - these have a front left and right, rear left and right (satellites), and a center channel, plus the subwoofer. That's a lot of speaker wires coming out the back of your computer!

2. Powered Speakers

Internal sound cards have to run off the 12 Volt computer power supply. This limits the power available to a little over 2 Watts RMS (Root Mean Square) before the onset of overload distortion (we'll get into that later).
To get around this limitation, most high quality computer speaker systems bypass the internal power amplifier on the sound card and incorporate a higher-powered amplifier either in a standalone box or right inside the speaker enclosure. A separate line-operated power supply gives the juice to the powered
speakers. Having at least 10 Watts RMS for each speaker makes a lot of difference in the quality of the sound, even if you are not blasting rock and roll. Beware of PMPO (Peak Music Power Output) ratings because they are pretty meaningless, whereas RMS is a real measure of the ability of an amplifier to drive a speaker. Powered speakers are driven by an analog signal from one of the 3.5 mm jacks on your sound card. You could plug it into the Speaker/Headphone jack, but the quality would stink. This is where the Line Out jack comes into play. The line level signal bypasses the speaker amplifier on the sound card to give you a cleaner signal. The Line In jack bypasses the microphone amplifier for higher quality recording, but we'll have to get into that in a future Tech Tip.

3. Subwoofer

I already mentioned subwoofers as the x.1 part of a speaker array. A subwoofer is a special speaker that does a better job of producing the deep bass that makes music sound full and battlestar explosions more real. You can have a simple stereo speaker set, but to get the bass, the speakers on your desk would have to be large. I don't know about you, but my desk doesn't have room for large speakers. Instead, the speakers on your desk can be small to produce just the high frequencies that you want to hear directly. The subwoofer can be larger and placed under your computer desk, and you will still hear and feel the bass just fine. Just remember that the speakers have to be wired up so that only low frequencies go to the subwoofer and only high frequencies go to the small speakers. It's best to purchase a complete speaker system with the proper crossover (frequency splitting circuit) and interconnect cabling included.

4. External USB

An alternative to the traditional sound card is the USB speaker system that moves all the audio circuitry outside your computer. The only connection to the computer is the digital USB cable.
The debugging headaches outlined below apply to sound systems connected with analog signals. With a digital interface, the noisy computer case is completely separated from the sensitive audio circuits. USB-connected sound systems are especially good for laptop computers that may have only a headphone output that is not up to par for decent sound.

5. Sound System Debugging

Since computers were not originally designed to be part of a hi-fi sound system, it's no wonder that we run into problems like distortion, noise, and hum when we go for top-quality audio. Here are some tips for cleaning up those problems so you can have a sweet-sounding music or gaming system.

6. Distortion
Audio signals are prone to all sorts of distortion that can be introduced by everything from a dirty plug to a faulty audio CODEC (coder/decoder). Since we are talking about computer interconnects here, we'll limit our distortion discussion to just two causes. The first is dirty or loose plugs. Make sure your computer audio signals are protected by tight, clean connections; corroded contacts can do all sorts of nasty things to sound. Clean your plugs with a dry rag, or maybe a tiny bit of contact cleaner that you can buy at Radio Shack. If this doesn't result in nice shiny connectors, replace your cables. They are too cheap to let them degrade your sound. The second type of distortion involves overload: basically, turning the sound up so loud that the circuit cannot amplify it properly, so the peaks of the signal are clipped off (hence the names clipping distortion or peaking). This creates all sorts of harmonic distortion. If you are recording rock guitar, harmonic distortion is part of your sound; for playing back music on your computer, it sounds horrible. I'll go into how overload distortion interacts with how your computer sound system is connected up in the next section on noise.

7. Noise

All audio systems suffer from noise at some level. It's just a natural by-product of amplifying sound signals, but we can minimize it to the point that we don't hear it anymore. By noise, I mean the constant hiss you hear when you crank the volume way up. (We'll cover power-line hum in a moment.) I'm going to introduce a concept that even many engineers don't seem to grasp, but it's really simple: gain distribution. You want the least amount of gain between the source of the sound signal and your ears that you can have. Gain is a measure of amplification, and you can have too much of it. Here's a concrete example that you can demonstrate yourself and maybe even improve your computer sound.
Suppose you have a set of powered speakers that are fed an analog signal from your sound card. Play some music, say, a CD in the CD drive. Go to the full audio mixer control panel, usually opened by double-clicking the speaker icon at the lower right of your screen. Push the mixer slider for the CD Player all the way to the bottom. Push the Master control all the way to the top and turn the volume control on your powered speakers all the way up. You are probably listening to a lot of hiss with no music right now, because you have maximum gain in both your powered speakers and your sound card. Move the CD Player slider up very carefully and you will find that you reach maximum volume very quickly, but all that hiss is still there.
Now, reverse the process: start with the powered speakers turned all the way down and the mixer controls all the way up. Turn up the powered speakers only to a comfortable level. Back down the mixer controls only as needed to get rid of overload distortion. You can quickly find a setting that gives all the volume you want with low distortion and minimum hiss. This is the ideal gain distribution between the elements of your sound system.

8. Hum

Getting rid of hum in your computer sound system can be really frustrating. Hum is caused by our power lines, which run at 60 Hertz (which we used to call 60 cycles per second), and it can leak in from anywhere. Sixty Hz is a pretty low frequency that many speaker systems don't reproduce very well, but often the harmonics (multiples) of the power line frequency (120 Hz, 180 Hz, 240 Hz, and so on) are what we hear, and they can be really annoying. The first step in getting rid of hum is the optimum gain distribution described above: hum gets louder with more amplification, so get the gain down. The next step is to eliminate the source of the hum if you can. One source can be an unshielded cable between the Line Out of the sound card and the input of the powered speaker system. You can get cables with 3.5 mm plugs that are designed to hook up speakers, and these cables generally have no shield to protect the signal from hum induced by nearby power cords. Make sure all the cables between your sound card and your speaker system are shielded. Your best bet is to use the cables that came with the speaker system in the positions where the instructions tell you to use them; don't swap cables around. An unshielded speaker cable will work fine between the amplifier and a speaker, but not on the input of the speaker amplifier. Another culprit can be the power supply in your powered speakers. Leave the volume control where you usually listen when you like it loud, and unplug the power supply.
Now unplug the sound signal cable (the 3.5 mm plug), and make sure the plug isn't touching anything. It's not dangerous, but it will pick up hum if it is in contact with something. Plug the power supply back in. If you still have hum, it's coming from the power supply in the powered speakers, so you might think about getting it repaired or upgraded. Still another cause of hum is a ground loop. When you have the power cords of two pieces of equipment plugged into separate receptacles, there can be a long path between the two receptacles. The position of the audio cables relative to the power cords can also have your sound system humming. One way to avoid ground loops between equipment is to connect the power cords of the two pieces of equipment with a Y-cable that ensures they are plugged into the same branch circuit. If the Y-cable still has your computer humming a few bars, try repositioning the audio cables. Try to keep the audio cables, including speaker wires, bundled together with a cable organizer kit. Keep some separation between the audio cables and the power cords, but note that sometimes too much separation contributes to a ground loop. It really ends up being a trial-and-error solution.

Final Words

Even if you live with an MP3 player on your hip, chances are you recorded that music on your PC. Making PC sound systems sound good can involve a little wiring or a bunch.
Of course, having the right equipment in the first place is imperative. Understanding the causes of sound system woes can go a long way toward fixing them. Change one thing at a time and use the troubleshooting tips above to make sure you are getting the best sound possible out of your gear.
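The gain-distribution idea from the noise section above can be sketched numerically. This is an illustrative model only, not from the article: the gain and noise figures are made up, but they show why cranking the last stage (the powered-speaker amp) amplifies every bit of hiss added upstream, while putting the gain early keeps the noise floor low at the same listening volume.

```python
# Sketch (illustrative numbers): why gain distribution affects hiss.
# Model a two-stage chain: sound-card mixer -> powered-speaker amp.
# Each stage adds a small fixed noise; downstream gain amplifies it.

def output(signal, mixer_gain, amp_gain, stage_noise=0.001):
    s = signal * mixer_gain + stage_noise          # sound-card output plus its own noise
    s_amp = s * amp_gain + stage_noise             # speaker-amp output plus its own noise
    noise = stage_noise * amp_gain + stage_noise   # the noise portion alone
    return s_amp, noise

# Bad staging: mixer slider low, speaker amp cranked
loud_bad, noise_bad = output(signal=1.0, mixer_gain=0.01, amp_gain=100.0)

# Good staging: mixer high, amp modest -- roughly the same overall volume
loud_good, noise_good = output(signal=1.0, mixer_gain=1.0, amp_gain=1.0)

print(round(loud_bad, 3), round(noise_bad, 4))    # 1.101 0.101
print(round(loud_good, 3), round(noise_good, 4))  # 1.002 0.002
```

Both settings deliver about the same signal level, but the badly staged chain carries roughly fifty times the hiss, which is exactly what the slider experiment demonstrates by ear.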
Tech Tip 45 - Computer Interconnections IV (Picture Perfect)

Article by Roy Davis

As a major geek, I probably spend more time looking at the back of a computer than I do the front of a television. Even so, having the gadget-freak side of a geek in me too, I just had to buy the family one of those new flat panel TVs. If you think the back of your computer has a confusing assortment of connectors on it, look at the dark side of a recent model television and be prepared to be confused. This article will sort out this confusion, and in the process, we'll note that more and more of the connections on computers are showing up on TVs and vice versa. This piece addresses only the actual video connections. Then there is HDTV, in so many formats it will make you blind; we'll have to save discussion of formats for another Tech Tip.

What Are We Up Against?

Just as our computers are hooked up to everything under the sun, televisions are expected to interface with lots of electronic goodies, including our computers! As we progress from the over-the-air analog TV that was designed pre-World War II to the latest digital video medium of HDTV, the interfaces for these signals (and their connectors) have to change to keep up. Here's a list of the video connections you will find on a (Typical High-End TV):

F Connector
Composite
S-Video
Component
D-Sub (VGA)
DVI
HDMI

If you are building a full-on home theater, you might use a (Video Projector) that adds more controls, such as an RS-232 connector, infrared remote control, and
even a jack to control curtains over your projection screen for normal or widescreen modes!

F Connector

Let's start with the most basic: the F connector is the most rudimentary type of interface used on televisions. It's threaded, with a single contact in the middle, and the cable that connects to it is usually very stiff. This carries the signal that comes from the antenna or the cable TV company, usually from a similar connector mounted on the wall. The video rides on a radio signal on one of the television channels. The cable is stiff because it has a metallic shield right under the plastic jacket. It's a coaxial cable (commonly called coax, pronounced co-ax), meaning it has one regular wire in the middle surrounded by that metal shield. This is a good conductor of the very high frequency radio signals that carry broadcast television or cable TV. You want to be careful not to kink your coax or even bend it too tightly. Those F connectors are a bear to screw in when you have to reach around behind a TV. First, you want to get the center conductor of the plug lined up and poked into the hole in the jack. Then, you need to carefully align the plug and push it against the jack as you turn the rotating part of the plug. Don't twist the cable, just the little nut on the plug. The trick to smooth threading of the plug onto the jack is to have it lined up exactly, with no sideways pressure. Computers have been seen with F connectors on them too. Check out this (TV Tuner Card) that can turn your PC into a television or even an FM radio.

Composite

The first improvement in video interconnection for home electronics was the composite video interface. It uses a single RCA jack, usually colored yellow, and carries the video signal from a piece of equipment, like a (Camcorder), to the (TV). The audio signal is carried by separate cables, usually color-coded red and white
(sometimes black). The composite video signal is limited by the fact that all the elements that make up the video are crammed into one wire. If you must use a composite interface, don't use any old audio cable with RCA plugs on it, especially if you have to make a long run. Get a decent video cable to keep the quality of the signal up.

S-Video

Having all the video elements on one wire causes some interaction between those elements, especially the color (chrominance) and brightness (luminance) signals. S-Video (for Super Video, after the Super VHS decks where S-Video connectors first appeared) runs the color and brightness signals on separate wires in a cable terminated with a 4-pin circular DIN plug. Low-end computer video cards with (TV Out) use an S-Video format.
Component Video (RGB)

Your video screen is made up of tiny dots of light called pixels (picture elements), and pixels come in red, green, and blue. The color of the pixels can be individually varied to combine into millions of different colors. You get the best image quality when the signals for each of the three component colors are kept separate. Don't confuse component video with composite video, even though they sound very similar. They have the RCA connector in common, but component video uses three separate connectors, usually color-coded red, green, and blue. If you are doing a DVD-to-TV hookup and component, S-Video, and composite interfaces are all available, use the component! It will show a noticeable improvement. For HDTV, it's the entry-level interface.

D-Sub (VGA)
The standard VESA (Video Electronics Standards Association) computer video interface is really a component analog interface like component video, except that instead of having three separate plugs, it uses a single D-sub connector with 15 pins, just like your computer's. Many newer TVs include a D-sub input so they can be used as a computer display. How about hanging a (32 inch TV) over your desk that can double as a huge computer monitor? Play DVDs on your computer while eating lunch and relax! Even though the (Cheapest Computer Video Card) has a D-sub video interface, don't think it's low-end performance. Good computer monitors have requirements beyond even HDTV, and the D-sub interface can deliver for computer-type displays.

DVI

Computers generate images on a pixel-by-pixel basis. LCD (Liquid Crystal Display) panels, plasma TVs, and DLP (Digital Light Processing) projectors are all digital image devices. Digital television, including HDTV, operates on individual pixels. The conversion to analog, even high-end component video, can degrade the performance of these digital images. For the best digital-to-digital connection for your video, get a DVI connection. DVI stands for Digital Visual Interface. A (High-end Computer Video Card) will have a DVI connector. It may come with adapters to other video formats so you can jack your PC into just about any television or computer display. To make things interesting, there are three types of DVI interconnects: one that does digital only, called DVI-D; one that does analog only, referred to as DVI-A; and one that can do either digital or analog video, called DVI-I, with the I standing for integrated. There are six different types of DVI pin-outs. The most common have 24 pins in 3 rows. The digital version has a blank area on one side with a flat key. The combo
type has the 24 pins plus four more pins arranged in a square with a flat blade ground contact in the middle. DVI-A and DVI-D cannot be crossed over, but either can plug into DVI-I equipped gear.

HDMI

Suppose you just bought a new (DVD Player) with an HDMI digital interface on the back. If the above video interfaces aren't enough (especially the DVI-x confusion), there is a digital video interface that's relatively new on the market called HDMI, for High Definition Multimedia Interface. It's so new that most people have to use a (DVI to HDMI cable) to hook up existing DVI equipment to a unit with an HDMI connector on it. HDMI uses a small flat connector that has two rows of pins surrounded by a shell that protects the pins from getting bent. This connector and the cable that goes with it are a lot easier to route in a tight home entertainment center, and you can probably plug it into the back of a piece of equipment just by feel, a definite improvement over other video connection systems. In addition to carrying High Definition digital video, HDMI cables also carry up to 8 channels (7.1) of digital audio, unlike any other cable here, making them true integrated digital A/V cables. Many video equipment manufacturers are jumping on the HDMI bandwagon, and it's bound to become the most popular. By the way, if you must plug in a video connector on the back of a PC or other equipment and you can't see the back of the device, get out a small hand mirror and a flashlight. It's much easier, and you won't bend as many pins when you can see what you are doing.

Adapters

As you go about chasing the latest and greatest geek toys, there are bound to be instances when you have to hook up one type of video connection to a different type. Fortunately, many of the video interfaces can be converted with the use of an adapter. For the latest gear, you can use a dedicated DVI to HDMI cable as mentioned above, or you can use an existing cable with an (HDMI to DVI Adapter).
If you want
to put your PC with a DVI jack on the back up on your big-screen TV, you might have to use a (DVI to Component Video Adapter) if the TV doesn't support DVI. Some video conversions require only these simple and inexpensive adapters. Others need more expensive converter boxes and probably aren't a good idea, as the image will suffer.

Final Words

Both computers and televisions are becoming more versatile than ever, and the distinction between the two is blurred forever. Use your computer as an entertainment system for a bedroom or dorm room. Run video games on your computer hooked up to the video projector in the family room. To pull off these stunts, you just need to understand these tips on video interconnects: which are best for what, and how you can mix and match your video equipment.
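The chrominance/luminance split that S-Video and component video preserve has a simple arithmetic core: the brightness (luminance) signal is a weighted sum of the red, green, and blue components. The sketch below uses the standard-definition (ITU-R BT.601) weights; this is background math, not something the article specifies.

```python
# Sketch: the luminance (brightness) signal is a weighted sum of the
# red, green, and blue component signals. Weights are the standard-
# definition ITU-R BT.601 coefficients.

def luminance(r, g, b):
    # Inputs normalized 0.0-1.0. Green dominates because the eye
    # is most sensitive to green light.
    return 0.299 * r + 0.587 * g + 0.114 * b

print(luminance(1.0, 1.0, 1.0))  # pure white -> ~1.0
print(luminance(0.0, 1.0, 0.0))  # pure green -> 0.587
```

Composite video crams this luminance signal and the color information onto one wire; S-Video carries them on separate wires, and component video keeps all three color signals apart, which is why each step up the list looks better.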
Tech Tip 46 - Disaster! (Geeks to the Rescue)

Article by Roy Davis

The news of late has been full of natural and manmade disasters, including the hurricanes on the Gulf Coast and an anniversary of the 9/11 horrors. In every disaster, it seems the highest priority is communication. To be sure, evacuation, food, water, shelter, and medical care are critical needs, but none of it can be delivered without communication between the government and the responding agencies. You might not be the hero who runs into a burning building to save a child, but as a knowledgeable and prepared geek you can still have a big impact. We'll talk about some of the issues you might face and how you could use your geek skills to improve the situation.

1. Hurricane Katrina Lesson

When hurricane Katrina hit New Orleans, the power was lost, cellular and wired telephones went out, and government radio repeaters were silent. The mayor and the emergency operations center had only one communications link to the outside world: the Internet. That's right - the officials had only one Internet connection, and fortunately there was a geek present who rounded up some routers and cables and hooked up a makeshift local area network with a bridge to the Internet. That's how messages got to and from the people in charge for several days.

2. Why the Internet?

Why did the Internet survive in the face of disaster when every other major form of communication was disabled? The answer is that the Internet is not a particular medium of communications with a single point, like a telephone central switching office or trunking radio controller, that can fail and bring the whole network down. The Internet makes use of almost any communications medium available. You can be connected to the Internet via a cable, through a dialup telephone line, DSL or cable TV modem, wirelessly by WiFi, cellular, satellite, EVDO, or even ham radio.
The most significant point here is that your Internet address doesn't change, no matter how you are hooked up. When telephone networks go down, your phone number is useless. When a particular government radio channel is out, the workers can't find you. But if they send a message to your Internet e-mail account, you can pick it up a dozen different ways.
3. Hard-Wired Internet

Since most commercial and even some home Internet connections largely run on modern below-ground fiber optic trunks, they are not as vulnerable to damage by wind, water, or even fire. In the hotel where the New Orleans officials set up shop, there was one functioning direct-cabled Internet connection still working. The technical person in the group scared up a (Network Switch) to split out the single Internet connection so the officials could jack in their (Laptop Computer)s and start communicating via e-mail. Having a (Cabling Kit) on hand, complete with lots of CAT 5e cable, RJ-45 connectors, and the necessary wire stripping and crimping tools, would allow you to quickly wire up any command post.

4. Telephone Dialup

In some situations or locations, you might have to resort to a telephone dialup connection. Though too slow for streaming video or high-performance Web surfing, a 28.8 Kilobit per second (Kbps) telephone connection can handle a lot of text e-mail. If your portable computer does not have a built-in telephone modem, you could use a (USB Telephone Modem) to make the connection in an emergency.

5. WiFi

WiFi, or 802.11 wireless networking, is very good magic in an emergency. You don't need to run a cable to a laptop computer if you set up an (802.11g Wireless Router). This small box splits out an Internet connection, be it hardwired, cable or DSL modem, or whatever, and provides a short-range radio connection to computers within a few hundred feet. Position the router in a clear spot so the radio signals have a chance to radiate toward the computers to be served. If the router has an external antenna or two, keep the antennas approximately vertical. Computers can be moved around and new machines added to the network almost instantly. Many laptop computers come with WiFi built in, but for a computer that doesn't have it, you can use a (USB WiFi Network Adapter).
If it is running Windows XP, then just plugging it in should install the device, and you can select the wireless network from the popup menu. This USB adapter comes with a USB extension cable so you can position the adapter away from your computer to better receive the radio signals from the router. Line-of-sight to the router is best, but at least get the adapter, with its built-in antenna, away from objects like metal desks or file cabinets.

6. WiFi Antennas

WiFi equipment is designed for short ranges, and the signals don't penetrate walls or floors very well. You can extend the range of your wireless network by using an (802.11G Range Expander) strategically positioned between the router and the computers to which you are trying to connect. Putting the range expander in a window, doorway, or stairwell can spread the WiFi signals beyond a wall or floor that would normally stop them. If you need to go beyond the nominal range of WiFi in an emergency situation, you can add a (Range Extender Antenna) to the computer, the router, or both. Keep in mind that nothing is free: a directional extender antenna can go a longer distance,
but only cover a narrow angle. First, try putting an antenna on the computer end of the link and point it at the router. The omnidirectional antenna on the router can then continue to serve other computers closer to it in any direction. If you have to extend the range by using a directional antenna on both ends of the link, then use a separate router to serve the close-by computers.

7. Amateur Radio WinLink 2000

Licensed radio amateurs (not to be confused with unlicensed Citizens Band radio) have a long tradition of providing communications assistance during disasters. In fact, the (American Radio Relay League) got started relaying messages copied by hand from radio operator to radio operator. The concept was very much like the Internet, where messages could pass through the network of stations by many different paths, making it much more robust than commercial communications systems. Radio amateurs (also known as hams) have recognized the value of the Internet and have developed a method of relaying Internet packets over their radios called WinLink 2000. A radio operator outside the disaster area can act as a gateway station to connect to the rest of the Internet. Small portable stations can be brought into the disaster area to provide a link even if all other forms of Internet connection are gone. You can help out by earning an amateur radio license and joining a local emergency radio organization like ARES (http://www.ares.org/) or RACES (http://www.races.net/). Check the (ARRL Web site) to see how you can extend your geek skills even further.

8. Power to the People

In the emergency command post, or even a shelter set up in a school gym, you will have a cluster of folks helping out, and they need power for their laptops and all the geek gadgets it takes to hook them up. Don't take power and the availability of outlets for granted. Use a (Power Strip with Surge Protection) to split out the end of an extension cord.
You need the surge protection because there may be lightning storms and wind that can short power lines, causing spikes on the line. In the worst case, you might be running your equipment off a portable generator. For an evacuation shelter you would want a (Medium Duty UPS), while a command post should have a bigger backup with a (Heavy Duty UPS). Generators have to be shut down for refueling, and the Uninterruptible Power Supply will keep your computer and network equipment running for several minutes while the generator is down. Also, the output of most portable generators is very dirty, and the surge protection built into the UPS will protect your valuable equipment.

9. Internet Takes Over for Newspapers

It's not only government officials and rescue agencies who are resorting to the Internet in times of disaster. Newspapers in the path of hurricane Rita announced that they were limiting or suspending publication of their hardcopy newspapers, but that readers could continue to get news via their web sites. The Galveston County Daily News wanted to protect its employees by keeping delivery personnel off the streets. The Port Arthur News cancelled publication and abandoned its offices, but updated its web site news. Again, this is an example of the independence of the Internet from physical plant and particular locations. If you are waiting out the storm, you might want to have a (PDA with
Bluetooth) to connect to the Internet via your cell phone to keep up on the news and weather reports.

Final Words

A little preparation can put you in a position to help yourself, your family, and your community in case of a disaster. Stock up on canned food, water, and flashlight batteries, but don't forget the capabilities of the Internet for providing communications in time of need. As an experienced geek, you can get it together when the crunch comes. Speaking of getting it together, having a (Transport Case) handy, with a list of the equipment and accessories you need, can make packing quick when you can't afford to forget anything and there are no local stores left open for last-minute pickup.
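Whatever link a makeshift command post ends up on - wired, dialup, WiFi, or ham radio gateway - the first question is always "can we reach the Internet right now?" A minimal sketch of such a check follows; the target host (a well-known public DNS server) and port are my own illustrative choices, not from the article, and any reachable host would do.

```python
# Sketch: a quick Internet-reachability check for a makeshift network.
# Attempts a TCP connection to a well-known public DNS server; if it
# succeeds, some route to the Internet exists, whatever the physical link.
import socket

def internet_up(host="8.8.8.8", port=53, timeout=3):
    try:
        # create_connection handles name resolution and the TCP handshake
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

print("Internet reachable:", internet_up())
```

Running this every minute or two from one laptop gives the whole room an early warning when the shared uplink drops, before anyone wonders why their e-mail stopped flowing.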
There are many products available that make your computer seem less like a business tool and more like a home media center. There are sound cards capable of 8-channel surround sound, high-powered speaker systems, and graphics cards driving high definition monitors/televisions. Given this, the one component that may lack the popularity you might expect is the TV tuner card. TV tuners are becoming more common in computers as Windows Media Center Edition grows in popularity, but just about any computer with any operating system can tune in TV. This Tech Tip will look at some of the basics of computer TV tuners, including the interfaces, the technologies, and the performance.

Watching Programs

The interface of most TV tuners should be familiar to anyone who has used a television, as the manufacturers seem to do their best to make things look and act much like a standard TV. Although features vary from brand to brand, the on-screen display and controls generally look much like a basic television's, but now we have the TV inside a window on your computer screen. You can run the TV application in one window while surfing the web, or actually doing some work, in another. The screen can be stretched to just about any size to fit your tastes, and most tuners offer the option to go to full-screen mode, where the image will take over the entire screen. In addition to on-screen controls that can be manipulated with the mouse and keyboard, many TV tuners include a wireless remote control that could easily pass as the remote for a standard TV. This tuner (http://www.geeks.com/details.asp?invtid=SBT-TVFM&cat=VCD) from
Geeks.com features such a remote control, and by enlarging the image on that page, you can see that it includes buttons for the basic operation of the TV. Changing channels, adjusting volume, and recording a program are just as easy on your PC as they are with your TV and VCR. The things you are used to doing on a modern TV can be done here: assigning text tags to channels (i.e., making channel 36 show ESPN when selected), adding or deleting channels from the lineup, and so on.

Recording Programs

The interface for recording on most TV tuners is about as intuitive as that for watching programs. Borrowing the general functionality of a typical VCR or DVD player, on-screen and remote-control-based controls make easy work of recording any program. Many tuners take advantage of an online programming resource, such as Titan TV (http://www.titantv.com/), that makes programming your computer to record a show about as easy as operating a personal video recorder (PVR), and generally much easier than operating an actual VCR. Titan TV provides program listings for just about all locations, which can be browsed by the user; more importantly, they can be retrieved by the tuner in order to program a recording with ease. The quality of the recording can be varied with most tuners to allow users to store the files with their preferred balance of audio/video quality, file size, and file format. It may be a process of trial-and-error to find which settings work best with your computer hardware and personal preferences, but the options are generally there. Shows can be recorded in rather basic modes that won't take up too much disk space, in high quality modes that will require better computer hardware and more disk space, as well as many stages in between.
It is best to seek out reviews, such as this one (http://www.bigbruin.com/reviews/leadtektv2000/index4.php), that show what formats and qualities can be expected with specific tuners, as there are many variables that might need to be considered.

Capturing from Other Sources

Many tuners are also listed as capture devices, as they do much more than just tune in television programs. In addition to having a coaxial connection for receiving the television signal, most include other ports (S-video, composite, or component) for hooking up items such as a video camera, VCR, DVD player, etc.
The software provided offers options for capturing from these various sources, much as it allows for recording television. The quality of the recording can be configured to suit your needs and computer hardware capabilities, and all your other video sources can be saved to disk (well, maybe not all, as some copyright-protected content can't be recorded legally). Having the ability to capture from other sources is quite convenient, as it allows old home movies or VHS tapes to be backed up to your hard drive. These files can then be watched on your computer, or, with the appropriate authoring software, they can be burned to a CD or DVD for use in any DVD drive or standalone player. Tapes will eventually wear out, but being able to archive such recordings to disk can help preserve them forever.

Computer Interfaces

PCI

All of the tuners presently in stock at Geeks.com (http://www.geeks.com/products_sc.asp?cat=863) are of the PCI variety, and this may be the most popular configuration on the market. Most computers have at least one available PCI slot, and the speed of this aging interface is still more than adequate for the demands of viewing/recording TV. The technology in many of the PCI tuners available today is the same as it was years ago, but then again, the same can be said of your typical television, too.

External (USB/Firewire)

Some manufacturers offer external TV tuners that connect to a computer via a high-speed connection such as USB 2.0 or Firewire. With the high transfer speeds possible, and the ease of installation, an external tuner is an excellent choice for any computer system. Where a PCI-based tuner can only be used in a desktop computer, an external tuner lends itself to being used with either a desktop or a laptop, making it a far more flexible option. Some tuners are/were available as USB 1.1 devices, and the performance of this interface can hinder the quality of recordings and even playback.
When selecting a USB-based tuner, it is important to make sure that it is specifically USB 2.0 so that you don't wind up with choppy audio/video. The AverMedia UltraTV 300 (http://www.aver.com/products/tvtuner_UltraTV_USB_300.shtml) is an example of a USB 2.0 based tuner for Windows systems, and the Elgato Systems EyeTV 200 (http://www.elgato.com/index.php?file=products_eyetv200) is an example of a Firewire-based tuner for Macs.

AGP/PCIe

ATI (http://www.ati.com) manufactures typical TV tuners for PCI and USB interfaces, as well as its line of All-In-Wonder (AIW) cards available
for AGP, PCIe, and previously PCI. These AIW products combine a TV tuner with a graphics adaptor on one board, allowing you to save an expansion slot on your motherboard without sacrificing quality on either the graphics or TV tuning side of things. Some of the latest high-end graphics cards from ATI are available as standalone models, as well as AIW models, such as the Radeon X800XL (http://www.ati.com/products/radeonx800/aiwx800xl/index.html).

TV Technologies

NTSC/PAL

The standard television signal in the United States, whether it is cable or antenna, is NTSC (National Television System Committee). PAL is a similar technology used in Europe, and the main difference between the two analog standards is that NTSC offers 525 lines of resolution at 60 half frames per second, while PAL offers 625 lines of resolution at 50 half frames per second. If you want to view television programming from a standard cable connection or an aerial antenna, you would want an NTSC/PAL based tuner. Presently, all of the tuners in stock at Geeks.com (http://www.geeks.com/products_sc.asp?cat=863) fit this bill.

HDTV

It has been mandated that all US broadcasts eventually be in high definition (HD), and many areas already have over-the-air and cable HD programming. The quality of high definition video (and audio) is far superior to NTSC or PAL, as the signal is digital and resolutions can be expected to have either 720 or 1080 lines. Just as a special HDTV is required to take advantage of the better picture quality, a special TV tuner would also be required to take advantage of HDTV on your PC. There are several HDTV tuners available, and the ATI HDTV Wonder (http://www.ati.com/products/hdtvwonder/index.html) is a PCI-based device that can tune in over-the-air HD broadcasts, as well as standard NTSC broadcasts from cable or an antenna.
Note that it only handles HD broadcasts through an antenna, as most other forms of HDTV require a proprietary tuner from your cable or satellite TV provider. Because computer monitors are capable of resolutions much higher than even HDTV signals, just about any monitor will work. The resolution may need to be adjusted, though; for example, 720 lines of resolution in HDTV would require the monitor to be capable of at least a native resolution of 1024x768. You just need the second number of the display's resolution (the vertical component) to be equal to, or greater than, the HD signal you want to display.

Stand Alone Tuners for Monitors

In addition to the devices mentioned so far, there is a similar type of product that can allow TV to be shown on a computer monitor. These devices eliminate the
need for a computer, as well as the ability to capture video or record programs, but allow the user to turn any computer monitor into a television. The NextVision N6 from ViewSonic (http://www.viewsonic.com/products/accessories/tvvideoprocessors/nextvisionn6/) is an example of one of these devices; it combines a cable box that outputs a signal compatible with a computer monitor with a few other special inputs for use with other components like console game systems, DVD players, and VCRs.

Final Words

TV tuners add a whole new dimension to any PC, and don't have to cost much to do so. With the ability to watch TV full screen or in a small window, as well as record programs much like commercially available personal video recorders, they can be quite a convenient accessory to have.
Tech Tip 48 - HTPC Pointers, Part I

HTPCs (Home Theater Personal Computers) are gaining in popularity as people spend more time and money getting comfortable in the living room, and as the availability and understanding of the technology becomes more user friendly. An HTPC can be used for a variety of things, including video/audio playback, streaming online media, big screen gaming, and watching/recording television, all of which make them quite appealing devices to have. HTPCs don't have to be much different than your typical computer, but a few key areas do need to be addressed. Just as with a typical desktop computer, there are limitations to what certain configurations can achieve, but customizing the hardware and software can enhance the experience to the point where it may threaten the existence of some of your more traditional home electronics components. This two-part series of Tech Tips will cover a few pointers related to getting started with your own HTPC. There are obvious considerations that will be different from person to person (such as budget and existing home theater components), but in general, there are a few things that should be addressed by any prospective HTPC builder. In this part of the series, we will take a look at the two biggest things to consider: video and audio.
Video

If this is a theater, you had better be able to see something! Most graphics cards come configured to display on monitors using either a 15-pin VGA cable or the newer DVI cable. Although many new televisions, specifically HDTVs, have a DVI connection that may accept the signal from your computer, other models may require an alternative connection.
Most graphics cards with TV-out connections are capable of sending signals out over a variety of different interfaces, so choosing the right card can provide greater compatibility as your system is upgraded. The typical connections include VGA, DVI, composite, S-video, and component, and if you look around, you can find cards that support all of these interfaces. On the end view of this GeForce 6600 card (http://www.geeks.com/details.asp?invtid=V6600512PI&cat=VCD), you can see a typical TV-out connection. The round port to the left of the DVI and VGA connections accepts a video breakout cable, which allows the signal to be transmitted over either an S-video or composite cable. The combination of S-video and composite is the most common offering on graphics cards, as many televisions (even much older models) can accept these cables and will allow the computer's signal to be displayed on screen. The issue with S-video and composite is that they do not provide the best video quality, and the home theater experience may suffer. Regardless of resolution, text may be hard to read, the screen may flicker, the images may look slightly skewed, and overall, you will be wishing for something better. That something better can be found through the use of component video cables. Not many video cards support them, but those that do definitely have the HTPC user in mind. The signal over component video can be of excellent quality, and will support the highest resolutions of HDTV signal (if your card and TV are both capable). Although the specifications don't specifically indicate that it supports it, the description does mention that this GeForce FX 6600GT
card (http://www.geeks.com/details.asp?invtid=PCIE-OCTFX6600GT128&cpc=SCH&srm=0) includes a component video cable. XGI Tech is a manufacturer of video cards that supports the use of component video, and even budget cards like the V3XT (http://www.xgitech.com/products/products_2.asp?P=8) (less than $50) can give the owner the ability to send a 1080i or 720p HDTV signal out to a compatible TV/monitor. HDTVs are without a doubt the way televisions are headed, so people should consider having their HTPC be prepared for this, whether they have the necessary TV yet or not. Sending an HDTV signal from your computer to an HDTV ready television requires not only the right connections (generally DVI, component, and now HDMI), but a card capable of producing HDTV resolutions. HDTV, especially widescreen HDTV, has a different resolution than your typical monitor, so trying to reproduce your standard desktop resolution on an HDTV may not look so hot. For example, distortion will occur if you try to use your standard 1024x768 (4:3) or 1280x1024 (5:4) monitor resolution on a 720p HDTV, which is looking for a signal at 1280x720 (16:9). Cards that support true HDTV output will be sure to advertise it, and are worth investigating for an HTPC to be coupled with an HDTV.
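The distortion from an aspect-ratio mismatch is easy to quantify. Here is a short sketch in Python (the helper function and variable names are just for illustration), using the resolutions mentioned above:

```python
def aspect_ratio(width, height):
    """Return the width:height ratio of a display resolution."""
    return width / height

# Resolutions discussed above:
desktop_43 = aspect_ratio(1024, 768)   # standard 4:3 monitor, ~1.33
hdtv_720p = aspect_ratio(1280, 720)    # 720p widescreen, ~1.78

# Stretching a 4:3 image to fill a 16:9 screen widens it by this factor:
stretch = hdtv_720p / desktop_43
print(f"4:3 image on a 16:9 screen is stretched {stretch:.2f}x horizontally")
```

In other words, a circle drawn at 1024x768 ends up about a third wider than it is tall on a 720p display, which is exactly the distortion the paragraph warns about.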
Getting a video signal out of your HTPC is mandatory, but getting a video signal back into the HTPC may be just as important to some. TV tuners and capture cards (http://www.geeks.com/products_sc.asp?cat=863) are gaining in popularity in all sorts of computers, and seem especially at home in an HTPC setup. An HTPC with decent processing speed and ample hard disk space can easily store hours of recordings from sources such as VHS, DVD, and perhaps most importantly, television. When people think of watching or recording television, they associate it with an activity that occurs in the living room, since the VCR or PVR (i.e. TiVo) is traditionally located there. With an HTPC, you can use a TV tuner card and some fairly basic software to provide the functionality of a PVR without an extra component in the rack, and without the additional monthly service fee that a service like TiVo requires.
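To get a feel for the "ample hard disk space" requirement, here is a rough back-of-the-envelope sketch. The bitrates below are illustrative assumptions, not measurements from any particular tuner or software:

```python
def recording_size_gb(bitrate_mbps, hours):
    """Approximate file size of a capture at a given stream bitrate.

    bitrate_mbps: total bitrate in megabits per second (assumed value)
    hours: length of the recording
    """
    seconds = hours * 3600
    megabits = bitrate_mbps * seconds
    return megabits / 8 / 1024  # megabits -> megabytes -> gigabytes

# Illustrative capture qualities (assumed numbers):
for label, mbps in [("VHS-like", 2), ("DVD-like", 6), ("HDTV broadcast", 19)]:
    print(f"{label} ({mbps} Mbit/s): ~{recording_size_gb(mbps, 1):.1f} GB per hour")
```

Even at a DVD-like quality setting, an hour of television eats up a couple of gigabytes, so a drive dedicated to recordings fills faster than you might expect.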
The recent Tech Tip on TV tuners (LINK TO TV TUNER TIP not up yet) covers all of the basics of these devices, which hold true for use with desktop computers as well as with HTPCs. Your typical TV tuner allows you to receive your usual programs from an antenna or cable, and with a card like the ATI HDTV Wonder (http://www.ati.com/products/hdtvwonder/index.html), you can even watch and record HDTV. The software included with TV tuners is all somewhat similar, as it mimics the look and feel of your TV/VCR and provides fairly intuitive controls for watching and recording programs. Although many offer options for programming your recordings like a TiVo might do, there is other software available, and for even more information, visit a resource like Build Your Own PVR (http://www.byopvr.com).

Audio

After video, audio has to be the next most important thing to consider in a theater setting. If you intend to enjoy movies, games, or streaming audio/video, you need to be able to hear it. The main considerations here involve choosing an adequate sound card and choosing the speakers it will drive. Higher end HTPCs will most likely tie into the traditional stereo components found in the living room, and use the speakers that already handle the audio responsibilities of VHS/DVD movies and CD audio playback, among other things. You could use even the most basic of sound cards (http://www.geeks.com/details.asp?invtid=L8738-4C&cat=SND) for this by adapting the 1/8" stereo jack to two RCA jacks to be plugged into the back of the receiver. The sound quality from this arrangement may pass for some, but it will not be nearly as good as it could be, and you will have reduced any surround capabilities down to two-channel stereo. Some sound cards offer digital audio connections that allow for the signal to be sent to the component receiver without being converted to/from analog before playback.
This eight channel sound card (http://www.geeks.com/details.asp?invtid=L-8768-8C&cat=SND) provides SPDIF (Sony/Philips Digital Interface) connections to handle such tasks. You will
obviously need a receiver that can accept SPDIF, and the sound quality will be greatly improved as compared to the 1/8" jack adapter method. Motherboards used to provide onboard audio that was considered the last resort for those not yet ready to spend any more money on a decent sound card. Things have definitely changed, and many of today's boards offer high quality audio processors with multiple channel surround sound, and some, like this Albatron nForce4 SLI board (http://www.geeks.com/details.asp?invtid=K8SLI&cat=MBB), even offer SPDIF connections onboard. Some sound cards, such as those in the X-Fi Series (http://www.soundblaster.com/products/product.asp?category=1&subcategory=208&product=14064) from Sound Blaster, take audio to an even higher level with refined controls, multiple digital/analog connections, and other professional quality features. If you don't have a component audio system to tie into, or if you just want to use speakers intended for use with a computer (http://www.geeks.com/products_sc.asp?cat=790), you could set up a dedicated system using the 1/8" stereo jacks as you would at your desk. The quality of computer speakers has improved greatly over the years, and the sound from some of these mini-systems can rival that of many bookshelf component systems. Combining a surround sound capable sound card with a set of 5.1 channel speakers, such as the Logitech X-530s (http://www.geeks.com/details.asp?invtid=970114-0403-DT&cat=SPK), is a cost effective way to get a decent sounding system that won't break the bank. The choices in surround sound computer speakers have grown greatly in recent years, with a variety of styles, arrangements, and power levels to suit just about any taste. For example, if the 70 Watt output of the X-530s isn't enough, you could ramp things up to something like the Logitech Z-5500s (http://www.logitech.com/index.cfm/products/details/US/EN,CRID=2177,CONTENTID=9486) sporting 500 Watts in a fairly compact 5.1 channel system.
Not all HTPCs reside in the living room, and this sort of setup might be an excellent match for a bedroom, dorm room, or other locations where you don't need big sound to fill the room with realistic audio effects.
Final Words

Video and audio are by far the most important things (in my opinion) to address when setting up an HTPC, but there is more to consider. In the second and final part of this series, we will look at other areas that can set an HTPC apart from your typical desktop computer and really enhance the experience.
Tech Tip 49: HTPC Pointers, Part II

Article by: Jason Kohrs
In the first of a two-part series of Tech Tips on getting started with a Home Theater PC or HTPC, we looked at the two most basic features to be considered: audio and video. With those areas addressed, there are still plenty of aspects worth considering that can help your computer become better integrated into your home theater for a truly enjoyable experience. Some of these aspects include component speed, cooling, noise, style, and the user interface.

Component Speed

As with any other computer, the speed of the components can be a concern for an HTPC. It all depends on exactly what functions the user has intended for their particular unit. If they are only interested in audio/video playback and functions like viewing photos and web browsing, even the most basic of computers may suffice. A
system solely for DiVX / DVD video and MP3 audio playback would work fine on just about any system, perhaps even an old 400 MHz system from the late nineties (if you have such a machine around at this point). The only additional items that would be necessary are a compatible AGP/PCI graphics card with TV-out and a sound card. Sure, the performance might not be up to par for today's applications, but it doesn't take much to handle the basics. Multi-tasking on such a system might result in jittery video or lagging audio, so don't ask an HTPC built around past generations of technology to do too much at once. Upgrading a system from these meager starting points only improves the capabilities of an HTPC, and may not cost all that much either. Considering the minimum required for handling the basics, you may even be able to retire your old desktop PC to the living room and still be satisfied with the performance. Cutting edge is not necessary, at least not always. If you want to play video games, the requirements change, and you will obviously need a processor, memory, and video card that match the minimum requirements published by the game manufacturer. Some of the required hardware, or suggested minimal hardware for reasonable performance, for games today is enough to have the casual game player thinking twice about such an investment. Playing video games on a big screen is an incredible experience, and takes things to a whole new level when compared to game play on your typical desktop monitor. Being able to see things in a game on a larger scale, and with the detail offered by an HDTV (if available), really makes these capabilities desirable. It's just a matter of deciding how modern of a system is needed, and whether it is an affordable path to take with your HDTV. One other area of HTPC use that may be speed critical is capturing video and TV.
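A quick way to reason about whether a system can keep up with a capture is to compare the stream's bitrate to the drive's sustained write speed. The numbers in this sketch are illustrative assumptions; real sustained write speeds vary with fragmentation, interface, and other disk activity:

```python
def can_keep_up(capture_mbit_per_s, drive_write_mb_per_s):
    """Rough check: can a drive sustain a capture stream?

    capture_mbit_per_s: stream bitrate in megabits per second (assumed)
    drive_write_mb_per_s: sustained write speed in megabytes per second (assumed)
    """
    needed_mb_per_s = capture_mbit_per_s / 8  # convert bits to bytes
    return drive_write_mb_per_s >= needed_mb_per_s

# A DVD-quality MPEG-2 capture (~6 Mbit/s) needs well under 1 MB/s of writes...
print(can_keep_up(6, 30))    # easily within reach of an ATA-100 era drive
# ...while uncompressed standard-definition video (~160 Mbit/s) can outrun
# a slow or busy drive and cause the dropped frames and sync issues below.
print(can_keep_up(160, 15))
```

The point is that compressed captures are gentle on the drive, but high-bitrate or uncompressed captures are where a slow disk (or a disk shared with the operating system) starts dropping frames.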
Not only do you need a processor capable of handling the conversion of the video signal to digital data, but you also need a hard drive that can keep up as well. Asking a system with a slow hard drive (or processor) to capture any length of video can result in a recording that has an audio/video sync issue or other glitches in the playback. Modern hard drives with SATA-150 (http://www.geeks.com/products_sc.asp?cat=430) or even ATA-100/133 interfaces should be fast enough, and multiple hard drives may really speed things up. One method of increasing the performance is to have one hard drive run the operating system and capturing software, while a second hard drive is only used to store the captured files. Another option would be to install your drives in RAID 0, as described in this Tech Tip (http://www.geeks.com/pix/techtips-012705.htm), which effectively doubles the speed at which your drives can read/write data.

Cooling & Noise
Cooling and noise will be addressed together, as the bulk of the noise from a computer system is generated by its cooling solutions. Keeping an HTPC's components nice and cool is just as critical as in a desktop PC, and maybe more so, as you don't want your system to overheat and crash in the middle of your favorite movie or game! That said, noise that seems minimal when produced by a typical desktop computer may seem excessive in an HTPC environment. If you are watching a movie with a particularly quiet scene, you do not want that silence punctuated with the hum of fans or the whir of drives. A previous Tech Tip (http://geeks.com/pix/techtips-011305.htm) on quieting your PC provides tips appropriate for a desktop computer, as well as an HTPC, making it a resource worth referencing. The bulk of the tasks handled in this environment are not overly demanding, and since older components can handle them, newer components can do so without breaking a sweat. But things like game play and other multimedia applications can be quite processor, memory, graphics, and hard disk drive intensive, so the components may generate a decent amount of heat that needs to be dissipated. Some low noise cooling steps are listed below. The easiest answer is to use larger fans (120mm (http://www.geeks.com/details.asp?invtid=CF120BL&cat=FAN)) that can spin at lower speeds to move the same volume of air as smaller fans (80mm (http://www.geeks.com/details.asp?invtid=LP400BL&cat=FAN)). In general, slower spinning fans generate less noise, while larger fans are capable of moving more air due to their larger area. This, of course, requires a case that can handle larger fans, such as this one (http://www.geeks.com/details.asp?invtid=KG-318-RED&cat=CAS), that has mounts for exhaust fans at 80mm, 92mm, and 120mm in size. Choosing a case will be discussed more in the section on style, but this is another area that can impact cooling.
Any case could pass in an HTPC environment, but choosing one optimized for efficient cooling is a good idea. If an HTPC is to be mounted in a component rack, it will need to dissipate its own heat while in a space filled with other heat generating components. Adding a fan speed controller (http://www.geeks.com/details.asp?invtid=CWBK&cpc=SCH&srm=0) is another approach to taming the roar of any size fan. The controller shown at the link provided controls up to 7 fans, so you could easily connect all of your case fans, processor cooling fan, and whatever else may be actively cooled. By using the integrated thermal probes and LCD thermometer, you can then monitor the temperatures of your
critical components and regulate the fan speed for a perfect balance between safe operating temperature and low noise output. Water cooling is another option for keeping system components running cool and quiet. Basic water cooling kits (http://www.geeks.com/products_sc.asp?cat=879) can be installed that cool the CPU, as well as perhaps the video card, motherboard chipset, system memory, and hard drives. Where you might have had a cooling fan on every component mentioned, you would now have one fan on a radiator that cools the water that circulates over these items. Even one fan may be too many, as some high-end water cooling kits, such as this one from Zalman (http://www.zalman.co.kr/usa/product/view.asp?idx=160&code=021), are passively cooled and don't have fans. Power supplies are another source of noise, as they generally employ fans to keep their internal components cool, as well as helping to cool the system's case by drawing air out. Many power supplies are now designed with one larger fan (http://www.geeks.com/details.asp?invtid=AP550W&cat=PWR) to provide the necessary cooling with minimal noise, while others can now be purchased with no cooling fan at all (http://www.antec.com/us/productDetails.php?ProdID=24350). Since power supplies with fans may contribute to a case's cooling, selecting a fanless power supply may require additional efforts (more airflow) inside the case to keep things at a comfortable temperature. Although cooling fans provide the bulk of the noise, drives may contribute as well. Many hard drive manufacturers now offer downloadable utilities, like Hitachi's Feature Tool (http://www.hitachigst.com/hdd/support/download.htm), that allow the owner to change the acoustic profile of their hard drives. Decreasing the noise output may impact drive performance, but it may be worth it in many situations in order to silence the system.
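The larger-fan rule of thumb mentioned earlier can be roughly quantified. This sketch uses a deliberate simplification (that airflow scales with swept area at a fixed rotational speed, ignoring real blade aerodynamics), so treat the ratio as an intuition-builder rather than a spec:

```python
import math

def swept_area_cm2(diameter_mm):
    """Approximate swept area of a case fan, treated as a simple disc."""
    radius_cm = diameter_mm / 10 / 2
    return math.pi * radius_cm ** 2

# Under the crude area-scaling assumption, a 120mm fan moves about
# (120/80)^2 = 2.25x the air of an 80mm fan at the same speed -- or,
# equivalently, it can spin much slower while moving the same volume.
ratio = swept_area_cm2(120) / swept_area_cm2(80)
print(f"120mm vs 80mm swept area: {ratio:.2f}x")
```

That 2.25x area advantage is why a single slow 120mm exhaust fan can quietly do the work of a pair of whining 80mm fans.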
Style

As I mentioned previously, just about any computer case will do when it comes to the basics of an HTPC, but style may be an important facet of your build. Many manufacturers now offer computer cases that mimic the design of traditional rack components, making your HTPC blend in with the rest of your home theater gear. Thermaltake (http://www.thermaltake.com/xaserCase/tenor/tenormenu.htm), Ahanix
(http://www.ahanix.com/), and Silverstone (http://www.silverstonetek.com/product-case.htm) are just a few of the brands focusing at least a part of their product lines toward HTPC enthusiasts. In general, an HTPC style case will have a horizontal configuration so that it will fit in your component rack. It will offer a stylized front face to mimic typical A/V components, the stock cooling solutions will be optimized for low noise output, and many times the chassis material will be aluminum to aid in the dissipation of heat. All of these special features are not cheap, as a good quality HTPC-specific case may cost several times what your typical mid-tower case would. For those with a creative side, modifying a more basic case may be more rewarding and a great deal less expensive. Perhaps a bit of cutting to optimize airflow, a coat of (silver or black) paint, and a few accessories should do the trick. Many of the options found in an off-the-shelf HTPC case can be purchased separately, so you could add things like a vacuum fluorescent display (http://www.thermaltake.com/xaserCase/medialab/medialabmenu.htm), stylized optical drive bezels (http://www.coolermaster.com/index.php?LT=english&Language_s=2&url_place=product&p_serial=AFP-U01&other_title=+AFP-U01+Alloy%20Front%20Bezel), and just about any other finishing touch required to turn your basic desktop case into an attractive HTPC case.

User Interface

All that is really important regarding the user interface is that you can see well enough to access the applications you want to run. If you have the screen size / resolution to do so, even your typical Windows desktop will be adequate for interacting with your HTPC. The Windows desktop may work fine, but it has more than you need on it for basic HTPC applications, and it generally looks more like a work environment than a play environment, so there are ways to address that.
There are Windows and Linux based shells to make the HTPC interface more user friendly, and these generally involve the use of large icons and text that provide access to only the most common applications: MP3 player, DVD player, web browser, image browser, and so on. These shells add functionality to your typical desktop operating system that can
make them much easier to navigate in an HTPC setting, regardless of the TV size/quality being used. For a free and easy to use HTPC shell for use with Windows XP, check out Media Portal (http://mediaportal.sourceforge.net/). If you don't want to install a shell on top of an existing operating system, or need to buy a new license of an OS for your HTPC anyway, Microsoft recognized the emergence of this segment of the market and has something for you (of course they do). Microsoft's Windows XP Media Center Edition (http://www.microsoft.com/windowsxp/mediacenter/default.mspx) may be the most familiar name when it comes to HTPC specific interfaces, and it combines an easy to use shell with many other backend features that make it custom tailored to a multimedia existence. There is another aspect to the user interface that is also a bit different with an HTPC. Your typical mouse and keyboard will obviously still function with a computer in your living room, but do you really want to be constrained to within three feet of it? A wireless setup is ideal, as it allows the computer to be located with the rest of the electronics, while you can be comfortably seated on the couch across the room and still have full control. There are numerous wireless mice (http://www.geeks.com/products_sc.asp?cat=562) and keyboards (http://www.geeks.com/products_sc.asp?cat=554) on the market, but for basic interaction, perhaps a combination unit would be the best bet. There are some wireless keyboards with a small joystick-style pointing device that lets one convenient device handle both tasks. These may be great for basic system navigation and launching of applications, but considering video games once again implies better hardware.
You can still go wireless for serious game play, but the precision of the components is more critical in a high-paced game, and a high-end wireless mouse like the Logitech MX1000 (http://www.geeks.com/details.asp?invtid=931175-0403-DT&cat=MOU), for example, may be worth the extra money. Another way to interact with your HTPC takes on a feel more familiar to components found in the living room: a remote control. The Ahanix iMon (http://www.bigbruin.com/reviews/imon/) is one example of an infrared remote
control, much like you would use on your TV or DVD player, but fully capable of controlling your computer. It allows for control of mouse functions, typical multimedia controls (play, stop, volume, etc.), as well as programmable buttons for launching your favorite applications. We have previously discussed the ATI HDTV Wonder (http://www.ati.com/products/hdtvwonder/index.html) for its video capturing abilities, but the included remote is also a nice feature, as it handles enough desktop features that it may minimize the need for a mouse or keyboard as well.

Final Words

By no means an exhaustive reference for building an HTPC, this Tech Tip hopefully touched on a few areas worth considering by those looking to bring a PC into the living room. HTPCs don't necessarily have to be powerful or expensive, but the possibilities only increase as the horsepower does, and hopefully this series of tips is a good starting point for a variety of interest levels in the home theater experience.
Tech Tip 50 - Scanning Photos and Film

Article by Roy Davis

You would think that chemical film photography is dead with all the digital cameras flying off the dealers' shelves these days. I certainly haven't taken as many snapshots on film as I used to, but there are certain situations where clicking off some exposures on film can't be beat. Maybe it's when you are on vacation and it's raining, or you are going to the beach. You don't want to get your expensive camera wet (salt water is especially damaging to electronics), so grab a cheap disposable camera to catch those precious moments.

1. Chemical Film Won't Die

Digital photography has many advantages on the convenience side. You don't have to buy film, and the photos are available for viewing immediately. You don't have to wait for a film processing house to send back your slides, or even the One-Hour Photo to crank out your prints. Also, digital photography is cheap. Once you own the camera, you just have to feed it batteries or the occasional memory card upgrade to expand your picture-taking capacity. When digital cameras first appeared on the market, the resolution was pretty coarse. Digital cameras have improved tremendously in that department, and high-end digital cameras rival 35mm film for image detail. The one area where digital
camera sensors are left behind is tonal range. Chemical film can have a range of tones that covers hundreds or even thousands of times more variation from dark to light. In an image where film can capture shadow details while maintaining highlights, a digital camera will show only black blotches for the shadows and blown-out bright spots. Digital photography is great for a scene with nice bright and flat lighting, but when it comes to the tough situations, film wins hands down. A subject in the shade of a tree with bright sky in the background will either have lots of the shadow detail lost, or the blue sky will turn white in the digital rendition. Film can capture this scene if photographed carefully. Where I still use a film camera is for underwater photography. Catching the eel hiding under the overhanging coral would be lost on a digital sensor. With film, you can pull old Mr. Eel out of the darkness.

2. Viewing Images from Film

Capturing wide tonal range on film is great, but only if you can see the range of light to dark. One way to do this is to project light through the film itself. It's almost a forgotten ritual, viewing slides from a vacation trip, but it's hard to beat the visual impact of blasting 500 Watts of light through a 35mm color slide in a darkened room. The clear film certainly can make eye-piercing bright white, while the silver (the metal in the film) image can block all but a tiny bit of light leaking through. The tonal range from white to black is much more than our LCD screens can manage. Using chemical means to print an image on photographic paper also exceeds what our printers can manage when making color prints. The specially treated ultra white paper and the dense color (or black) chemical dyes produce brighter whites and darker blacks than our photo printers.
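To put "hundreds or even thousands of times" into perspective, photographers usually express tonal range in stops, where each stop represents a doubling of light. The contrast ratios in this sketch are illustrative only, not measured figures for any particular film or sensor:

```python
import math

def stops(contrast_ratio):
    """Convert a dark-to-light contrast ratio into photographic stops.

    Each stop is a doubling of light, so stops = log2(ratio).
    """
    return math.log2(contrast_ratio)

# Illustrative figures only -- real values vary widely by film and sensor.
print(f"256:1 ratio  -> {stops(256):.0f} stops")
print(f"1024:1 ratio -> {stops(1024):.0f} stops")
```

Because the scale is logarithmic, a medium that captures "a thousand times" more dark-to-light variation is only about ten stops of range, which is why a few extra stops of film latitude make such a visible difference in shadows and highlights.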
3. Distributing Images

As good as slides are, there is a reason the slide projector got sold in the last rummage sale. No one wants to sit captive in a darkened room while a tray of 100 slides clunks by at a ten-second interval. The five-pound photo album sits on the shelf gathering dust as the prints fade away. That's because everyone has their face in a computer screen. Photos are sent by e-mail to the folks back East. People don't build photo albums; they produce personal Web sites for friends and family to see their latest shots. No matter how a photographic image is acquired, people want to distribute those photos electronically, and that means turning the image from print or film into a digital computer file.

4. Making the Conversion

If you want to shoot chemical film so you can make prints that win prizes at the photography show, but want to post copies on your Web site, then you have to convert those analog images to digital. Probably the hardest part of the conversion process is deciding which way to do it. You have several choices, including commercial processing, a flatbed or handheld scanner, or a full-on film scanner.

5. Taking a Picture of the Picture

It might be tempting to just whip out your digital camera and take a picture of a photo print to get that digital image file, but you may be disappointed with the results. It's hard to get the focus just right, and you really need a copy stand to hold the camera at just the right spot so the image doesn't come out with a keystone effect. You have to be right over the center of the print to get it square. Of course, the color is going to be shifted by the ambient light during the copy process, and the tonal range is going to be degraded way below what the film can capture and the print can reproduce.

6. Commercial Processing

The chicken way out is to have the film processing house scan your film at the same time that you have it developed.
If you have well-exposed images and only need low-resolution files for a Web page, then you can get by with this. I tried the commercial processor route once on my underwater photos and was very disappointed with the results, then went out and spent the money on a film scanner. We'll get to that one later. 7. Handheld Scanner At one time, handheld scanners were all the rage for the low end of the photo scanning market. You would grip the handles of this gadget, pull it across the snapshot, and hope the picture didn't slip. Keeping the scan even at a constant slow speed was a skill most people couldn't master. Flatbed scanners have dropped in price to the point where there is no point in fooling with a handheld scanner unless you are happy with low quality and need a portable device to throw in your laptop computer bag.
8. Flatbed Scanner The entry-level way to scan your photos is to use a flatbed scanner, which is also useful for scanning pages out of books, magazines, or any sort of document. Some flatbed scanners can handle only photographic prints, while the high-end Canon CanoScan 8400F can pull a decent image out of a slide. The trick is the 3200 dpi sensor. For every inch of image width, the scanner has 3200 individual sensors for each of the three colors. That's true optical resolution of 3200 dots per inch. You will see cheaper scanners with very high output resolution, but it is often interpolated resolution. They take a lower resolution sensor and calculate the average between two adjacent pixels to invent a new pixel in between. This sort of works once, but you will see bottom-end scanners where they have interpolated between the interpolations to produce an artificially high resolution specification. It's better to pay a bit more for a scanner that has true 3200 dots per inch resolution.
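Interpolation of the kind described can be sketched in a few lines. This illustrative function doubles the pixel count of a row of samples by inserting averages, which is exactly why interpolated resolution adds numbers but no detail:

```python
def interpolate_1d(pixels):
    """Double the apparent resolution of one row of sensor samples by
    inserting the average of each adjacent pair -- no new detail appears."""
    out = []
    for i, p in enumerate(pixels):
        out.append(p)
        if i + 1 < len(pixels):
            out.append((p + pixels[i + 1]) / 2)
    return out

# Four real samples become seven "pixels", but only four carry information.
print(interpolate_1d([10, 20, 40, 80]))  # [10, 15.0, 20, 30.0, 40, 60.0, 80]
```

Run it twice on its own output and you get the "interpolating between interpolations" the article warns about: a big pixel count built from the same four real measurements.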
A cheaper alternative is the HP Scanjet 4070 with a photo door. This unique feature holds a stack of snapshots for quick scanning, without fooling around with positioning the prints just right and manually setting the limits of the scan. Photographic prints can be of excellent quality for viewing, but they still lose detail and tonal range compared to the original film. Of course, a badly processed print from the corner drug store one-hour lab is going to yield a poor quality digital scan. Make sure you use a quality processing lab for both your film and prints if you want decent digital images. 9. Film Scanner If you are serious about turning chemical film images into digital scan files, then you are going to want a dedicated film scanner. These units are specialized for extracting the maximum quality image from film, and high-end models even have batch processing features that save you a ton of time. The other major advantage of film scanners is going directly from a film negative to a digital positive image. One film scanner that would make you happy is the Prime Film PF1800AFL. It can convert slides or strips of uncut film negatives up to 40 exposures long. With a USB interface, this unit is a snap to hook up to most computers.
The Mediamax Workscan 3600 Pro film scanner has higher resolution (3600 dpi optical) and the same batch scanning capability. It handles both slide and negative film too. This unit uses a FireWire interface, so you will probably need an IEEE 1394 FireWire adapter for your desktop computer or a PCMCIA FireWire adapter for your laptop. The file size of an uncompressed photo scan can be around 102 megabytes, so you will probably want some extra storage capacity, such as a 300 gigabyte hard drive.
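That 100-megabyte-class figure is simple arithmetic. As a rough sketch, assuming a full 36 x 24 mm film frame scanned at 3600 dpi in 48-bit color (2 bytes per channel), which is one plausible reading of the spec:

```python
def scan_size_mb(width_mm, height_mm, dpi, bytes_per_channel=2, channels=3):
    """Uncompressed file size of a film scan: pixel count x channels x bytes."""
    px_w = width_mm / 25.4 * dpi   # millimeters -> inches -> pixels
    px_h = height_mm / 25.4 * dpi
    return px_w * px_h * channels * bytes_per_channel / 1e6

# A 36 x 24 mm frame at 3600 dpi in 48-bit color:
print(round(scan_size_mb(36, 24, 3600)))   # about 104 (megabytes)
```

Halve the bytes per channel for 24-bit scans and the files drop to roughly 50 MB each, which is still a good argument for that big hard drive.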
The clear advantage of film scanners is that they excel at capturing the tonal range of film and converting the images into a digital file. Slide film has a wider tonal range than digital image sensors, and negative film (normal print film) has a huge range of tonal scale. For optimal results, you need to adjust the digital capture levels before the scan to compress the tonal range down so the shadow details are still there without blowing out the highlights. Final Words Digital photography is now the mainstream way of taking pictures for most of us geeks, though plenty of film cameras are still sold around the world. It helps that film cameras can be incredibly cheap (as in disposable) and still take pretty decent photos. On occasions like trips to the beach, many of us prefer to tote a film camera instead of the nifty digital. Since film still dominates in the high-quality image department, being able to convert from film to digital is an important function to master. Using a good quality dedicated slide and film scanner, capable of producing image files of super high resolution and a tonal scale that can't be captured any other way, is the key to capturing and preserving those special prints and negatives.
Tech Tip 51 - Computer Cooling Tips, Part I Article by Roy Davis Heat. We all love the heat during the summer, lounging out by the pool or basking in the sunlight on the beach. But heat can be deadly, especially to your costly computer components. This week we are going to talk about the basics of cooling your computer system and its components. 1. Why Is It So Hot In There? Computers are built out of many digital circuits. These circuits are constantly switching state as they carry out calculations, and heat is a byproduct of that switching. Computer chips, central processing units (CPUs), and graphics processing units (GPUs) are getting more powerful every day. With newer technology comes faster processing. Faster processing leads to more heat being generated. Without proper heat dissipation, your CPU [http://www.geeks.com/details.asp?invtid=ADA3700AEP5AR-N&cat=CPU] can be damaged beyond repair. 2. Going Down the (Heat) Sink The first line of defense in this war against heat is a heat sink. It's just a big chunk of metal that draws heat away from your CPU and spreads it out across its surface. The idea is to spread the heat over a larger surface area and let the air pick up the heat and whisk it away. The surface area of a heat sink is created by many fins. The traditional heat sink is made of aluminum and has many parallel fins, because aluminum can be easily extruded with straight parallel fins. The problem with this design is that air can only move through the fins in one direction, creating a problem for air flow within a computer case. Newer heat sink designs are becoming far more intricate. Engineers are finding creative and aesthetically pleasing designs to help dissipate heat more efficiently.
3. The Secrets to Making Heat Sinks Work The majority of CPU coolers use a simple aluminum heat sink because aluminum is inexpensive and a good conductor of heat. It is imperative that the heat sink be seated properly on the CPU; any resistance to thermal transfer could allow the CPU to overheat. A copper heat spreader for that critical center part of the heat sink is a good choice to aid in CPU cooling. This is a flat piece of copper sandwiched between the top of the CPU and the bottom of the aluminum heat sink. The copper heat spreader helps to move heat from the CPU immediately to a larger surface area. Whether you decide to use a heat sink with a copper heat spreader or not, you will need to use thermal grease [http://www.geeks.com/details.asp?invtid=TG-ST700&cat=CPU]. Because of its thermal conductivity and low resistance, thermal grease is essential to proper CPU cooling. Thermal grease needs to be applied between the CPU and the heat sink (or copper heat spreader) in a very thin and even film. When applying the thermal grease, be careful not to apply too much. To check your application, clamp the heat sink down to the motherboard and then remove it; if any thermal grease has squeezed off the CPU, wipe away the excess. 4. From the Sink to the Air Now that we have moved the heat from the CPU to the heat sink, we face the problem of moving the heat from the surface of the heat sink to the air inside your computer case. If we just let it sit there, the air around the heat sink gets hot and won't let any more heat escape. The answer to that problem is simple: a fan to blow the hot air away from the heat sink and let cooler air in. Almost all modern CPUs require a CPU cooler [http://www.geeks.com/details.asp?invtid=CF460S13-N&cat=FAN] with a fan to operate safely. The fan mounted on top of the heat sink blows cool air onto the fins of the heat sink.
Be certain to get a CPU cooler that matches the type of CPU in your computer, because the mounting configurations differ between brands and models. 5. Deadly Dust Bunnies We all know they are there, collecting quietly in the most inconspicuous places. Those dust bunnies just seem to collect everywhere! Unfortunately, they will collect on your fan blades and heat sink. If left unattended, they can build up and clog the fan motor, resulting in poor motor operation and increased heat on your CPU. Regrettably, we cannot spray a cleaning agent and wipe away the dust. We all know that liquids and electronics don't mix! But how do we clean the dust out and maintain an efficient cooling system? Get hold of a mini vacuum cleaner [http://www.geeks.com/details.asp?invtid=KBC-1B&cpc=SCH&srm=0] and simply suck those deadly dust bunnies away! I recommend you clean your system out regularly. While cleaning your CPU and heat sink, it wouldn't be a bad idea to clean the fan blades on your power supply too. 6. What About the GPU? The GPU on your video card is another hotspot. With the high demands on the GPU from 3-dimensional games, art programs and other software, your video card is susceptible to heat damage. Most newer video cards will have a heat sink and fan already installed, so be diligent and keep this clean as well. Another option to keep your video card running efficiently is to add heat sinks [http://www.geeks.com/details.asp?invtid=CK5000&cpc=SCH&srm=0] to the RAM modules. These will work in the same manner as the heat sink on your CPU and GPU. 7. Cool Casing Now that we have the heat removed from our CPU and our video card, how can we get rid of the hot air inside our computer case? One item you may want to consider is a thermal vent [http://www.geeks.com/details.asp?invtid=THERMALHOOD-II&cpc=SCH&srm=0]. This works just like the venting on your dryer. You should also think about adding some case fans [http://www.geeks.com/products_sc.asp?cat=372] to purge that hot air out of your case. Proper cooling is essential to a long computer life. But how do you know that your fans are doing their jobs? Adding a case fan controller [http://www.geeks.com/details.asp?invtid=CW-SV&cpc=SCH&srm=0] is a great start. Now you will be able to keep track of specific temperatures as well as control the fan speeds on your primary components.
One often overlooked hotspot in a computer is the front of the case where your hard drive is installed. A great way to keep your hard drives running cooler is to install a hard drive cooler [http://www.geeks.com/details.asp?invtid=SILDUALHDDFAN&cpc=SCH&srm=0]. This twin fan cooler mounts to your hard drive and is easily powered by connecting to your power supply. Most cases will have a fan mount option for a 3-inch fan at the bottom of the front panel of the case. This is a great place to add proper air circulation within your case, and it will help to cool your hard drives too. Final Words Our computers have cost us our hard-earned dollars. We save up to buy the very best and we expect reliable performance. But, just like a car, a computer requires proper maintenance. Keep them cool and clean and they'll last you a lifetime. Ignore them and they will start costing you more in the long run. Be cool and keep it cool!
Tech Tip 52 - Computer Cooling Tips, Part 2 Article by Roy Davis Last week we went over the basics of cooling your computer and covered the essentials of keeping your system running cool. This week, we will go over some key features to look for to keep your components running cool. 1. The Geek Explanation of Where the Heat Comes From Almost all digital computer circuits are built out of MOS transistors. MOS transistors are tiny electronic switches that pull the logic gate output up for a logical one and down for a logical zero. All the operations of your computer are done with ones and zeros, lots of them. These MOS transistors are very efficient when they are holding a logic state, one or zero. When the circuit changes state from zero to one or from one to zero, it draws a spike of electrical current that is then dissipated as heat. So think about all the circuits in a modern microprocessor or CPU. Standard 32-bit microprocessors [http://www.geeks.com/details.asp?invtid=P42800B478N&cat=CPU] might have 42 million transistors. 64-bit CPUs [http://www.geeks.com/details.asp?invtid=ADA2800AEP4AX-NB&cat=CPU] can have over a hundred million transistors, while Dual Core microprocessors [http://www.geeks.com/details.asp?invtid=ADA3000DAA4BP-NB&cat=CPU] have twice that. The first personal computers hit the market with clock speeds of about 1 Megahertz. That means that the signal that controls the electronic switches allowed them to change state up to 1 million times per second. Now, we are seeing microprocessors with clock rates 2,000, 3,000 and even 4,000 times faster. A chip with a 3.2 GHz rating runs at 3.2 billion cycles per second. That generates a lot of heat! 2. Copper Isn't Just for Pennies
One of the keys to high performance heat sinking is the use of copper in the critical areas to ensure the maximum transfer of heat. The measure of heat transfer is thermal conductivity, in W/m-K, or watts per meter-kelvin. Most heat sinks are made of aluminum, with a conductivity of 237 W/m-K. Copper, the base metal in pennies, is considerably better at 401 W/m-K. Only silver is better than copper, at 429 W/m-K, but the improvement does not warrant the extra cost of pure silver. The best traditional style CPU coolers [http://www.geeks.com/details.asp?invtid=CF481B8&cat=FAN] use a copper core heat spreader to move the heat from the lid of the CPU to the base of the aluminum heat sink. 3. Sleeve Bearing Versus Ball Bearing Fans Many computer fans spin on sleeve bearings. Sleeve bearing fans are the cheapest to manufacture because each is just a steel shaft turning against a block of brass or other soft metal lubricated with oil. Sleeve bearings can work well in many applications, but the slightest wear can allow the fan blades to wobble, making them inefficient for your precious CPU. High-quality fans turn on ball bearings. Hardened steel balls allow the shaft to turn freely without wobble and without excessive wear. The free turning gives the fan the ability to move more air from the same electrical input. Because they don't wobble, CPU coolers or case fans equipped with ball bearings tend to be quieter too. 4. Peltier Junction Coolers What's some French guy with a funny name got to do with cooling off your CPU? A lot of people thought the thermoelectric cooler invented by a watchmaker in 1834 would be the most high-tech way to pull the heat out of a hot microprocessor chip. Over a hundred years ago, they were freezing drops of water with Peltier junctions merely by passing a current through the device. Sounds like the ideal cooling gadget, right? Lots of high priced CPU coolers were sold on this premise.
The problem is that the Peltier Junction cooler just removes heat from one side of the device to the other. That means you can have a cold face to put on your CPU package (good), but the other face has the heat from the CPU as well as the heat generated by the current to run the Peltier device (very bad). You would need a massive heat sink to cool the cooler! 5. Satellite Technology Under Your Desk?
When satellite designers were faced with the problem of removing heat from the side of the spacecraft facing the sun and moving it to the shadow side, where it can be radiated off into space, they were kind of stuck. They couldn't use a massive aluminum or copper heat conductor because the weight of the satellite had to be minimized. The solution they came up with was the heat pipe. A heat pipe is literally a piece of pipe with the ends sealed off. Inside is a fluid that evaporates when heated. The evaporation quickly cools the end of the heat pipe where the heat is applied. The vapor then condenses on the cooler end of the heat pipe, causing that end to warm up. A wick moves the condensed liquid back to the hot end, where it starts the cycle all over. CPU coolers [http://www.geeks.com/details.asp?invtid=CF450B0&cat=FAN] based on heat pipes can move more heat to a wider area than any solid metal CPU cooler. The heat sink can be shaped to better take advantage of the cooling air from the fan. Overall, a CPU cooler outfitted with heat pipes can be smaller, lighter and more efficient than standard types. 6. A Cooling Tower for Your Computer? Those round towers outside atomic power plants are a symbol of the nuclear age. They seem sinister with their strange narrow waist and the white plume billowing out the top. But what most people don't realize is that those huge cooling towers are just a way to convert hot water into cool water to remove heat from the reactors. The white cloud over them is really just water vapor from the cooling process. If liquid cooling is powerful enough for atomic reactors that have to dissipate the waste heat from generating billions of watts of electricity, why not use liquid cooling on your CPU? [http://www.geeks.com/details.asp?invtid=EC-WC-201] The small heat exchanger sits on top of the CPU and pulls the heat into the liquid.
The hot liquid flows through a hose to the external heat exchanger where cool air is blown by a fan over the coils containing the hot liquid. The cooled liquid is then pumped back through another hose to start the cycle again, not unlike a car radiator. Liquid cooling provides a super-efficient way to remove dangerous heat out of a tight spot in your computer where blowing adequate cooling air is a problem. 7. Hot-Rodding Your Case Fan The best scenario for proper system cooling is having a well-designed computer case.
[http://www.geeks.com/details.asp?invtid=XK-TA1&cat=CAS] Newer computer cases are designed with ventilation in mind. Most cases will have at least four spots inside the case to mount fans (if they don't come with fans already). The fans [http://www.geeks.com/products_sc.asp?cat=372] included with most cases should be upgraded for better cooling. Some case fans have LED lights that will give your computer case a makeover. If noise is a concern, even after using ball bearing fans, an insulation kit [http://www.geeks.com/details.asp?invtid=NOISEBUSTER&cat=CAS] is an ideal way to eliminate the pesky rattle heard from fans. Final Words Our PCs have come a long way from a simple plastic microprocessor plugged into a socket to a sophisticated ceramic or metal clad CPU hiding under a pile of exotic cooling gadgets. The processor on your video card probably has more computing capability and more memory than your last computer. It will need its share of cooling capacity, so don't buy the cheapest CPU cooler on the shelf and hope for the best. A good CPU cooler and an extra case fan can be the ticket to a long and uneventful life for your computer. If you are pushing the limits of computing performance, then you will need the highest performance coolers.
Tech Tip 53 - Tips to Stop Phishing for Spyware and Spam By Stewart S. Miller Spyware is the means through which hackers gain access to your computer and your private information. Spyware is defined as any software that covertly gathers user information through your Internet connection without your knowledge, usually for advertising purposes. It watches everything you do on the Internet and sends that information, including private e-mail, passwords, and credit card numbers, to the hacker invisibly, without your knowledge. No matter how careful you are, regardless of what virus protection you buy, you will always be at risk without the proper anti-spyware tools to protect you. How do you know if you have been infected? If the Start page in your Web browser keeps changing by itself, if your computer starts crashing more often than usual, or if you have tried to uninstall unfamiliar programs only to find they are still there after you restart your computer, then you are probably infected. Spyware can be pretty malicious.
Keyloggers watch your every keystroke and mouse click, then record your passwords, log-ons, and account numbers. You might think you don't need to read this column because you've taken steps to protect yourself. Well, if you think that having all the most current antivirus software, Service Pack 2 for Windows XP, and a very powerful firewall is enough to protect you, then you would be WRONG! The fact is that these items by themselves do little to protect your computer from spyware, leaving you largely vulnerable to attack. Also, you know all those updates that Microsoft Windows XP installs? None of them protect you from spyware writers, who exploit ways to transparently install spyware through your Internet Explorer browser. These programs can even prevent Service Pack 2 from installing correctly. Once these programs infect you, your computer becomes very slow, because all your computer processing power is eaten up by the spyware itself. Don't allow yourself to be lulled into a false sense of security by any one anti-spyware program that claims to provide total protection - it doesn't exist. Spam is among the most virulent forms of abuse that any Internet user must endure. The problem is so common that most people find they are forced to change their e-mail address just to avoid getting junk e-mail. Unfortunately, changing your e-mail address is worse than changing your phone number, because afterward nobody knows how to contact you.
Microsoft Outlook 2003 and Eudora 6 are two of the major programs that can filter incoming e-mail as messages are received. If a message is believed to be spam, it is filtered to the spam folder for later review. Many mail servers running on UNIX machines run a program called SpamAssassin (www.spamassassin.org), which separates out messages that contain potentially unsafe attachments or match keywords representing spam, and rejects messages from known spamming addresses.
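The keyword-matching part of that approach can be sketched in a few lines. The rules and threshold below are made up for illustration; SpamAssassin's real scoring uses hundreds of weighted tests, but the shape of the idea is the same: each matched rule adds to a score, and mail over a threshold gets flagged.

```python
# A minimal keyword-scoring spam filter sketch. The phrases, point
# values, and threshold are invented for this example.
SPAM_RULES = {"free money": 3.0, "act now": 2.0, "no obligation": 2.5}
THRESHOLD = 5.0

def is_spam(message):
    text = message.lower()
    score = sum(points for phrase, points in SPAM_RULES.items()
                if phrase in text)
    return score >= THRESHOLD

print(is_spam("FREE MONEY!!! Act now!"))   # True  (3.0 + 2.0 meets the bar)
print(is_spam("Lunch tomorrow?"))          # False (no rules match)
```

Scoring rather than outright rejection is the key design choice: a single suspicious phrase in a legitimate note won't get it thrown away, but several together will.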
Internet phishing (pronounced "fishing") is when a hacker sends you an e-mail falsely claiming to be from an established, legitimate enterprise. The idea is to scam you into surrendering private information that will be used to steal your identity. The e-mail asks you to visit a Web site where you are asked to update your personal information, such as passwords, credit card numbers, Social Security number, and bank account numbers, information that the legitimate organization already has. The scam is that this Web site is bogus and is set up only to steal your confidential information. You must be careful whenever you receive an e-mail from what appears to be a trusted company. Hackers are very good at writing convincing letters that appear to be genuine. You must never, ever click on a link in one of these e-mails, because even though it might look authentic, it almost always is not. It is a very simple matter for a hyperlink to show one Web site and send you somewhere completely different when you click on it. These links are designed to take you to the hacker's site. Don't even cut and paste these links into your browser, because the hidden information in the URL takes you directly to the hacker instead of where you intended to go. When you need to go to a Web site, open a new browser window and type in the address by hand. That's the only way you can be sure. So, if you somehow find yourself on a Web site and you just aren't certain whether it is from the hacker or not, what can you do? Well, here is a good tip. If the site asks you for personal information, just type in a random set of information. If the site says you have entered invalid information, then at least you have a good clue that it is most likely authentic. However, if the Web site lets you type in any random information and comes back to tell you your information has been updated, then the site is almost certainly a hacker site designed to capture anyone's information (no matter what they type).
Another telltale sign of phishing is when e-mails are not addressed to you specifically by name but instead say, "Dear Customer." If an e-mail doesn't take the time to address you by name, something is wrong! When you receive an e-mail, ask yourself, "Why am I receiving this note?" If you are unsure, call the company directly and ask. Never assume an e-mail is authentic just because it looks like it came from a trusted company. Hackers easily spoof the "From" field of an e-mail to make it appear to be legitimate correspondence. Never click on an attachment contained in an e-mail, because you never know what virus or spyware is lurking beneath the surface, waiting to steal your private information and send it to the hacker world. It is important when you enter private information on a Web site to make certain the page address begins with https://. That "s" means secure, and, if it is not there, anything you input can be intercepted by a hacker. One of the nasty tricks hackers use when trying to redirect you to a fraudulent site is to mimic the URL of the trusted site. For example, you might want to go to mycard.citibank.com, but the hacker site might say something like mycard.citibank.com@216.45.54.303, where that @ symbol means you are connected to a hacker's Web site pretending to be your credit card company.
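The @ trick works because, by the URL rules of the day, everything before the @ is treated as a mere username and the real host is what follows it. Python's standard urllib.parse module splits a URL the same way, using the article's example address (modern browsers now warn about or block such URLs):

```python
# Everything before "@" in a URL's authority section is user info,
# not the destination; the real host follows the "@".
from urllib.parse import urlsplit

url = "http://mycard.citibank.com@216.45.54.303/"   # the article's example
parts = urlsplit(url)
print(parts.username)   # mycard.citibank.com  (decoration to fool the eye)
print(parts.hostname)   # 216.45.54.303        (where you actually connect)
```

A quick split like this is one way to check where a suspicious link really points before ever clicking it.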
Hackers are very good at what they do. Sometimes you can take every precaution and still find yourself in trouble, not knowing if you are giving your information to a hacker. The best protection is your own vigilance. Don't click, don't open, unless you feel confident about the sender.
Tech Tip 54 - Microprocessor History (Part 1, The Basics) Article by Roy Davis The microprocessor, or CPU, as some people call it, is the brains of our personal computer. I'm getting into this history lesson not because I'm a history buff (though computers do have a wonderfully interesting past), but to go through the development step-by-step to explain how they work. Well, not everything about how they work, but enough to understand the importance of the latest features and what they do for you. It's going to take more than one article to dig into the inner secrets of microprocessors. I hope it's an interesting read for you and helps you recognize computer buzzwords when you're making your next computer purchase. 1. Where Did CPUs Come From? When the 1970s dawned, computers were still monster machines hidden in air-conditioned rooms and attended to by technicians in white lab coats. One component of a mainframe computer, as they were known, was the CPU, or Central Processing Unit. This was a steel cabinet bigger than a refrigerator, full of circuit boards crowded with transistors. Computers had only recently been converted from vacuum tubes to transistors, and only the very latest machines used primitive integrated circuits, where a few transistors were gathered in one
package. That means the CPU was a big pile of equipment. The thought that the CPU could be reduced to a chip of silicon the size of your fingernail was the stuff of science fiction. 2. How Does a CPU Work? In the 1940s, mathematicians John von Neumann, J. Presper Eckert and John Mauchly came up with the concept of the stored-instruction digital computer. Before then, computers were programmed by rewiring their circuits to perform a certain calculation over and over. By having a memory and storing a set of instructions that can be performed over and over, as well as logic to vary the path of instruction execution, programmable computers became possible.
The component of the computer that fetches the instructions and data from the memory and carries out the instructions, in the form of data manipulation and numerical calculations, is called the CPU. It's central because all the memory and the input/output devices must connect to the CPU, so it's only natural to put the CPU in the middle and keep the cables short. It does all the instruction execution and number calculations, so it's called the Processing Unit. The CPU has a program counter that points to the next instruction to be executed. It goes through a cycle where it retrieves from memory the instruction the program counter points to. It then retrieves the required data from memory, performs the calculation indicated by the instruction and stores the result. The program counter is incremented to point to the next instruction, and the cycle starts all over.
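The fetch-execute cycle described above can be sketched as a toy stored-program machine. The three-instruction set and single accumulator here are invented for illustration; no real CPU works exactly like this, but the loop is the same one every CPU runs.

```python
# A toy stored-program machine: instructions and data share one memory,
# and a program counter drives the fetch-decode-execute cycle.
def run(memory):
    acc = 0   # accumulator for intermediate results
    pc = 0    # program counter: address of the next instruction
    while True:
        op, arg = memory[pc]    # fetch the instruction the PC points to
        pc += 1                 # increment the PC for the next cycle
        if op == "LOAD":
            acc = memory[arg]   # retrieve the required data from memory
        elif op == "ADD":
            acc += memory[arg]  # perform the calculation
        elif op == "STORE":
            memory[arg] = acc   # store the result
        elif op == "HALT":
            return memory

# Instructions at addresses 0-3, data at 10-12, all in one memory.
program = {0: ("LOAD", 10), 1: ("ADD", 11), 2: ("STORE", 12), 3: ("HALT", 0),
           10: 2, 11: 3, 12: 0}
print(run(program)[12])   # 2 + 3 = 5
```

Because the program lives in memory just like the data, you change what the machine does by changing memory contents, not by rewiring circuits, which is exactly the von Neumann insight.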
3. The First Microprocessor In 1971, when the heavy iron mainframe computers still ruled, a small Silicon Valley company was contracted to design an integrated circuit for a Busicom business calculator. Instead of hardwired calculations like other calculator chips of the day, this one was designed as a tiny CPU that could be programmed to perform almost any calculation. The expensive and time-consuming work of designing a custom wired chip was replaced by the flexible 4004 microprocessor and the instructions stored in a separate ROM (Read Only Memory) chip. A new calculator with entirely new features could be created simply by programming a new ROM chip. The company that started this revolution was Intel Corporation. The concept of a general purpose CPU chip grew up to be the microprocessor that is the heart of your powerful PC. 4. 4 Bits Isn't Enough The original 4004 microprocessor chip handled data in four-bit chunks. Four bits gives you sixteen possible numbers, enough to handle standard decimal arithmetic for a calculator. If it were only the size of the numbers we calculate with that mattered, we might still be using four-bit microprocessors. The problem is that there is another kind of calculation a stored instruction computer needs to do: it has to figure out where in memory its instructions are. In other words, it has to calculate memory locations to process program branch instructions or to index into tables of data. Like I said, four bits only gets you sixteen possibilities, and even the 4004 needed to address 640 bytes of memory to handle calculator functions. Modern microprocessor chips like the Intel Pentium 4 [http://www.geeks.com/details.asp?invtid=P42800C478&cat=CPU] with 64-bit extensions can address up to 18,446,744,073,709,551,616 bytes of memory, though the motherboard supports far less than this total. This led to the push for more bits in our microprocessors.
We are now on the fence between 32 bit microprocessors and 64 bit monsters like the AMD Athlon 64 [http://www.geeks.com/details.asp?invtid=ADA3500DIK4BI-N&cat=CPU].
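The arithmetic behind the push for more bits is just powers of two: each extra address bit doubles the memory a CPU can reach.

```python
# Address space grows as 2**bits bytes: every added address bit
# doubles the amount of memory the CPU can reach.
for bits in (4, 8, 16, 20, 32, 64):
    print(f"{bits:2d} address bits -> {2**bits:,} bytes")
```

Twenty bits gives the 8086's 1 Megabyte (1,048,576 bytes), and sixty-four bits gives the 18-quintillion-byte figure quoted above.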
5. The First Step Up, 8 Bits With a total memory address space of 640 bytes, the Intel 4004 chip was not fit to be the starting point for a personal computer. In 1972, Intel delivered the 8008, a scaled-up 4004. The 8008 was the first of many 8-bit microprocessors to fuel the home computer revolution. It was limited to only 16 Kilobytes of address space, but in those days no one could afford that much RAM. Two years later, Intel introduced the 8080 microprocessor, with 64 Kilobytes of memory space and a rate of execution ten times that of the 8008. About this time, Motorola brought out the 6800 with similar performance. The 8080 became the core of serious microcomputers that led to the Intel 8088 used in the IBM PC, while the 6800 family headed in the direction of the Apple II personal computer. 6. 16 Bits Enables the IBM PC By the late '70s, the personal computer was bursting at the seams of 8-bit microprocessor performance. In 1979, Intel delivered the 8088, a low-cost version of its 16-bit 8086, and IBM engineers used it for the first PC. The combination of the new 16-bit microprocessor and the name IBM shifted the personal computer from a techie toy in the garage to a mainstream business tool. The major advantage of the 8086 design was up to 1 Megabyte of memory addressing. Now, large spreadsheets or large documents could be read in from the disk and held in RAM memory for fast access and manipulation. These days, it's not uncommon to have a thousand times more than that in a single 1 Gigabyte RAM Module [http://www.geeks.com/details.asp?invtid=1024DDR3200-N&cat=RAM], but back in that time it put the IBM PC in the same league with minicomputers the size of a refrigerator. 7. Cache RAM, Catching Up With the CPU We'll have to continue the march through the lineup of microprocessors in the next installment to make way for the first of the enhancements that you should understand.
With memory space expanding and the speed of microprocessor cores going ever faster, there was a problem of the memory keeping up.
Large, low-powered memories cannot run as fast as smaller, higher-power RAM chips. To keep the fastest CPUs running at full speed, microprocessor engineers started inserting a few of these fast, small memories between the main RAM and the microprocessor. The purpose of this smaller memory is to hold instructions that get repeatedly executed, or data that is accessed often. This smaller memory is called cache RAM, and it allows the microprocessor to execute at full speed. Naturally, the larger the cache RAM, the higher the percentage of cache hits, and the more often the microprocessor can continue running at full speed. When program execution leads to instructions not in the cache, the instructions must be fetched from main memory and the microprocessor has to stop and wait.

8. Cache Grows Up
The idea of cache RAM has grown along with the size and complexity of microprocessor chips. A high-end Pentium 4 [http://www.geeks.com/details.asp?invtid=BX80547PG3000F-DT&cat=CPU] has 2 Megabytes of cache RAM built into the chip. That's more than twice the entire memory address space of the original 8088 chip used in the first PC and clones. Putting the cache right on the microprocessor itself removes the slowdown of the wires between chips. You know you are going fast when the speed of light over a few inches makes a difference!

9. Cache Splits Up
As I mentioned above, smaller memories can be addressed faster. Even the physical size of a large memory can slow it down. Microprocessor engineers decided to give the cache memory a cache of its own. Now we have what is known as L1 and L2 cache, for level one and level two. The larger and slower cache is L2, and its size is the one usually quoted in specifications for cache capacity. A few really high-end chips, like the Intel Itanium II, have three levels of cache RAM. Beware that the sheer size of cache RAM and the number of levels are not good indications of cache performance. Different microprocessor architectures
between Intel and AMD make it especially hard to compare their cache specifications. Just as Intel's super-high clock rates don't translate into proportionately more performance, doubling the cache size certainly doesn't double the performance of a microprocessor. Benchmark tests are not perfect, but they are a better indicator of microprocessor speed than clock rate or cache size specifications.

Final Words
I hope you enjoyed this first installment of the history of microprocessors. It's nice to know the humble beginnings and compare them to how far we have come in the computing capability of a CPU. Understanding the basics of how a microprocessor works gives you a leg up on grokking the more advanced features of today's mega-microprocessors. In future installments, we are going to dig into such microprocessor enhancements as super-scalar, hyper-threading and dual core. The concepts aren't that hard, and in the end you can boast about the latest features of your new computer with confidence.
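To make the cache-hit idea from this installment concrete, here is a toy direct-mapped cache simulator. Everything in it (line size, number of lines, the address stream) is an illustrative assumption, not any real CPU's cache geometry:

```python
# Toy direct-mapped cache. LINE_SIZE and NUM_LINES are made-up numbers
# chosen for illustration; real caches are far larger and more complex.

LINE_SIZE = 16   # bytes per cache line
NUM_LINES = 8    # 8 lines -> a 128-byte toy cache

def simulate(addresses):
    """Count cache hits and misses for a stream of byte addresses."""
    tags = [None] * NUM_LINES
    hits = misses = 0
    for addr in addresses:
        line = (addr // LINE_SIZE) % NUM_LINES  # which cache line to check
        tag = addr // (LINE_SIZE * NUM_LINES)   # which memory block is wanted
        if tags[line] == tag:
            hits += 1       # already cached: the CPU keeps running full speed
        else:
            misses += 1     # fetched from main RAM: the CPU stops and waits
            tags[line] = tag
    return hits, misses

# A tight loop re-reads the same few addresses, so nearly every access hits:
print(simulate([0, 4, 8, 12] * 100))  # (399, 1): one miss loads the line
```

The near-perfect hit rate of the tight loop above is exactly why even a small cache lets the processor run at full speed most of the time.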
Tech Tip 55 - Microprocessor History (Part 2, More Than a Toy) Article by Roy Davis
In the last installment we talked about the very early days of microprocessor history and how it grew out of a programmable calculator into the first personal computers. The history is just the foundation for understanding the features that make our current-day microprocessors so powerful, and it helps to illustrate what is really important in your next computer purchase. There have been many players in the microprocessor story, but the two companies that have had the most impact are also the two that dominate the market: Intel and AMD. Along the way, other big companies like IBM and Motorola made major contributions but have fallen by the wayside in the marketplace for personal computers.

1. Expanding Address Space
We started with the Intel 4004 and how that first device could handle only 4-bit numbers and up to 640 bytes of memory address space. Nowadays, it's common to plug in hundreds of megabytes in a single Memory Stick [http://www.geeks.com/details.asp?invtid=1024DDR3200-N&cat=RAM]. Running the latest operating system, like Microsoft Windows XP, requires quite a bit of memory. Working on digital photographs or running the newest computer games requires even more. It's not uncommon to have a Gigabyte (1,000 Megabytes) of RAM in your computer.

2. Finding the Way
How does the microprocessor keep track of all that memory? To answer that, we need to talk about some of the components of a microprocessor. Last week, I
mentioned the program counter. The program counter is a register. Your microprocessor is full of these registers: little chunks of hardware that act like the sticky notes you would use to remind yourself of something. Registers are easily and quickly referenced by many of the instructions the computer executes. A microprocessor will have several of these registers dedicated to pointing to memory addresses. The program counter indicates the next instruction, while the index register is used to automatically step through tables of data. A stack register keeps track of memory addresses for returning from program subroutines. The memory addressing registers of an 8-bit microprocessor are almost always twice as big, at 16 bits. When personal computers shifted to 16-bit microprocessors, the registers grew to 32 bits. That meant there were a lot of specialized instructions and multiple steps to manipulate the memory addresses, which slowed down the execution of programs. It became obvious that a microprocessor that can handle a full memory address in one chunk would run faster. That's why we are using 32-bit microprocessors even in our low-end systems [http://www.geeks.com/details.asp?invtid=PX743AAR&cat=SYS], and the big boys [http://www.geeks.com/details.asp?invtid=PS578AAR&cat=SYS] are sporting 64 bits at a time.

3. Segmented Versus Unsegmented
Early on, the struggle to handle a large memory address space while maintaining microprocessor speed took two paths. Motorola took the simple approach of making the memory space flat, so that it is addressed as one big continuous memory string. The instruction set was symmetric, meaning for every read operation there was a corresponding write. Intel took a more convoluted route. They broke the memory up into segments. Simple and fast addressing modes were used within a segment. The program instructions and data usually lived in different segments.
When a program had to jump to another segment or retrieve data from a different segment, the segment registers had to be modified, which could take several steps. The idea was that the slow crossing of segment boundaries was more than made up for by the simple and fast operation within a segment. In the first IBM PC, the segments were 64 Kilobytes, which was pretty limiting even then. Intel was quick to respond when it released the 286 microprocessor in 1982 by expanding the segments to 1 Megabyte. The Intel 386 microprocessor took the segment size to 4 Gigabytes in 1985, and segment size hasn't been an issue since. This Intel segmented design became more popular than the unsegmented Motorola flat memory.
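The segment arithmetic can be made concrete. In the original 8088/8086's real mode, a 16-bit segment register is shifted left four bits and added to a 16-bit offset, producing a 20-bit physical address, which is where the 1 Megabyte limit comes from. This sketch just reproduces that arithmetic:

```python
# Real-mode 8086-style segmented addressing: a 16-bit segment value is
# shifted left 4 bits and added to a 16-bit offset, giving a 20-bit
# physical address (2**20 bytes = 1 Megabyte of address space).

def physical_address(segment, offset):
    assert 0 <= segment <= 0xFFFF and 0 <= offset <= 0xFFFF
    return ((segment << 4) + offset) & 0xFFFFF  # wraps at the 1 MB limit

# Each segment spans only 64 KB (the offset range), which is why crossing
# a segment boundary meant the slow step of reloading a segment register.
print(hex(physical_address(0x1000, 0x0010)))  # 0x10010
print(2 ** 20)                                # 1048576 bytes = 1 Megabyte
```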
4. Memory Protection
The one feature Intel offered that made it the microprocessor of choice was memory protection. When we had simple operating systems and ran one program at a time, memory protection wasn't an issue. Now that we run multi-threaded operating systems with a dozen applications in memory simultaneously, a more sophisticated memory addressing method than the simple flat model is needed. In the flat model, when a program runs away it can mess up the code or data of other programs. This was difficult to debug because there was no indication of which application caused the problem. With the Intel segmented architecture, a program cannot delve into the memory space allotted to another application. If it tries, a memory fault message is generated and only that program crashes. You usually can recover from this situation without having to reboot, and you know exactly which program was the culprit.

5. Multitasking
The Intel 386 was a breakthrough microprocessor in another very important way. It had hardware and special instructions that supported true multitasking. Nowadays, we take it for granted that several applications can run simultaneously. In actuality, only one program is executed at any particular time. Your computer runs so fast that it can let each program run for just a short bit of time, then switch to the next program. Prior to the 386, in order for two pieces of software to be in memory and multitasking back and forth, the software had to be specially written to run for that short time. This scheme depended on the software to cooperate, a burden that rarely worked out in the real world. The 386 could stop a program in its tracks and suspend it while other programs ran. Then the OS would switch back to the first program as if nothing had ever stopped it. This is called preemptive multitasking and is an important concept to remember for later. Finally there was a window of opportunity for the Microsoft Windows operating system!

6.
Multi Data
The microprocessor was just now catching up to the features of the mainframes and minicomputers that came before it. In 1997, Intel introduced the MMX feature in its Pentium line of microprocessors. MMX is a trade name for Single Instruction, Multiple Data (SIMD) capability.
Playing video games on a personal computer was all the rage, and even engineers wanted faster graphics performance for rendering 3D models. The Pentium-class microprocessors could handle data in 64-bit chunks, which was great for heavy-duty scientific calculations, but graphics number crunching only needs 8-bit numbers, and it needs them fast! A peculiarity of video processing is that the microprocessor has to do the same calculation over and over for all the pixels on the screen. The Intel engineers devised a way to reuse the 64-bit calculation hardware by splitting it up into smaller pieces: either two 32-bit, four 16-bit, or eight 8-bit registers. The calculations were restricted to integers, making them ideal for the simple graphics processing of the day.

7. Faster and Faster
AMD countered Intel's MMX with its 3DNow! SIMD design. This expanded the SIMD concept with floating-point calculations that extended the range of numbers that could be crunched. Only a year later, the Intel Pentium III [http://www.geeks.com/details.asp?invtid=PIII500S12&cat=CPU] came out with SSE, or Streaming SIMD Extensions, which further improved on the MMX design with more flexibility and registers expanded to 128 bits. Note that SIMD is not multi-threaded or even multitasking. SIMD only works when special instructions are used by the programmer, and the data has to be organized in a very strict format to fit into the SIMD registers. Even with all the restrictions, SIMD established the PC as a graphics processing powerhouse and unseated the specialized graphics silicon from companies like Silicon Graphics.

8. Contrary to KISS
In most cases, KISS (Keep It Simple, Stupid) works. Simple is better than complicated, except in the case of microprocessors. One group of companies tried to keep to the KISS principle and designed Reduced Instruction Set Computer (RISC) microprocessors.
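The register-splitting trick behind MMX-style SIMD described above can be sketched in a few lines: one 64-bit value is treated as eight independent 8-bit lanes, with no carries crossing lane boundaries. This is purely a software illustration of the concept; real MMX/SSE hardware does all eight additions in a single instruction:

```python
# Sketch of SIMD lane splitting: one 64-bit addition reused as eight
# independent 8-bit additions (each lane wraps at 256, no carry between
# lanes). Illustrative only; real SIMD does this in hardware.

def simd_add8(a, b):
    """Add two 64-bit values as eight 8-bit lanes."""
    result = 0
    for lane in range(8):
        x = (a >> (8 * lane)) & 0xFF
        y = (b >> (8 * lane)) & 0xFF
        result |= ((x + y) & 0xFF) << (8 * lane)  # no carry between lanes
    return result

# Brighten eight 8-bit pixel values at once by adding 0x10 to each:
pixels = 0x1020304050607080
print(hex(simd_add8(pixels, 0x1010101010101010)))  # 0x2030405060708090
```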
They believed that by keeping the instructions very simple, they could make the microprocessor run so much faster that it would make up for having to use more of those simple instructions. Intel went the CISC (Complex Instruction Set Computer) route, with many, many instructions, some of which could do some pretty complicated
things. What the RISC guys didn't count on was that Intel and AMD could fabricate their own microprocessors with the complexity of more than 100 million transistors and get those complex machines running at gigahertz rates. Scientists and engineers had favored the expensive RISC workstations for their supposedly superior performance over the PC. When it became obvious that RISC was a dead end, scientific and engineering software migrated to the PC and its inexpensive yet powerful CISC microprocessors.

9. Supercomputers On the Cheap
The supercomputer is the ultimate number-crunching machine, used by physicists to work out the mysteries of the universe, meteorologists to predict the weather, and doctors to dissect human genes. They have to process lots of numbers over and over. Hey, wait a minute, that's just like the video processing for our video games. Well, guess what: those supercomputers are built out of the same microprocessors that are the brains of our PCs! They just connect lots of them together to break the problem up into smaller chunks and process the data in parallel.

Final Words
That was some pretty dense technical stuff! I hope you caught the gist of it, because we'll be using a lot of these terms in the next installment. All this talk about processing data in parallel is leading up to something. We'll get around to
talking about hyper-threading and how even within a single microprocessor there are a lot of things going on in parallel. It will lead to other features like superscalar processing where more than one instruction can be executed during a single clock cycle. All this to make your programs run faster!
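The break-the-problem-into-chunks idea behind those supercomputers-on-the-cheap can be sketched with Python's standard library. The chunk size and the stand-in workload here are illustrative assumptions, not a real scientific code:

```python
# Sketch of data parallelism: split a big job into chunks and hand each
# chunk to a separate worker process, the way a cluster hands pieces of a
# problem to separate CPUs.

from concurrent.futures import ProcessPoolExecutor

def crunch(chunk):
    """Stand-in for a heavy per-chunk calculation."""
    return sum(x * x for x in chunk)

def parallel_sum_of_squares(data, workers=4):
    size = max(1, len(data) // workers)
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        # Each chunk is crunched in its own process; results are combined.
        return sum(pool.map(crunch, chunks))

if __name__ == "__main__":
    data = list(range(1000))
    print(parallel_sum_of_squares(data))  # same answer as the serial version
```

The design point is that the workers never need to talk to each other mid-calculation, which is what makes this kind of problem cheap to parallelize.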
Tech Tip 56 - Keeping Windows Clean By Stewart S. Miller
Windows is a living entity. Protecting Windows and keeping it clean of malware (malicious programs) is a full-time job these days. Windows users are often frustrated by Microsoft's continued efforts to secure its operating system against hackers, because patching Windows seems like a never-ending chore. The real question here is: what do all these patches mean to you? Microsoft releases critical alerts on a regular basis, designed to protect Windows from hacker attacks. The most severe vulnerabilities deal with security bugs that allow hackers to gain complete control over your computer. Some of these flaws exist in the way Windows Media Player and MSN Messenger process certain files. Microsoft has also identified bugs in how Exchange (its Internet mail server software) and Office allow hackers to execute hostile code on vulnerable systems. Some patches are meant to prevent a hacker from gaining unauthorized access to certain sections of a Web site. Another bug, in the Windows Shell component, may permit a hacker to cause an affected system to stop responding. These vulnerabilities make it possible for hackers to spy on your PC. With the advent of Service Pack 2, updates have become a seamless process that simply executes in the background. The problem with this easy method of installation is that you, as the user, need to know what is being changed on your computer. This is why I recommend you always view the list of updates before allowing Windows to update your system.
Security Updates
One Microsoft security patch includes a change to the functionality of a clear-text authentication feature of Internet Explorer. This update removes the ability to embed user names and passwords in HTTP and HTTPS (HTTP with Secure Sockets Layer, SSL) URLs. An example of the type of URL that is no longer supported would look like: http(s)://username:password@server.com If you think your version of Windows is too old to be affected by these security concerns, think again. Windows 98, Windows 98 SE, and Windows Millennium Edition are all critically affected by these security vulnerabilities too. If you are running Windows NT 4.0 Workstation SP6a or Windows 2000 Service Pack 2, update support ceased at the end of last year. Microsoft encourages those users to migrate to a supported version of Windows to prevent potential exposure to these security vulnerabilities.

Protection Settings
You can take steps to protect yourself from future attacks. Set your Internet and local intranet security zone settings to high, so your computer will prompt you before running ActiveX controls and active scripting in these zones. Setting your browser security to high applies the highest level of protection against unsafe content that comes across your network. If this setting causes some of your sites not to load properly, you can add those sites individually to your list of trusted sites. However, you should only do so if you are sure the site is safe to use and is hosted by a company or entity you trust. As a final note, there is a free program I highly recommend you download called the Microsoft Baseline Security Analyzer (MBSA) tool (http://www.microsoft.com/technet/security/tools/mbsahome.mspx), which verifies when a security update has been applied to your system. It lets you scan your system for missing security updates as well as common security misconfigurations.
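To see why the username:password URL form was worth disabling, here is a small sketch using Python's standard urllib.parse. Both host names are hypothetical examples, not real sites:

```python
# Anything before the "@" in a URL's authority is treated as credentials,
# so a link can *display* a trusted-looking name while actually pointing
# somewhere else entirely. Host names below are made up for illustration.

from urllib.parse import urlsplit

def describe(url):
    """Return what the browser treats as (username, real destination)."""
    parts = urlsplit(url)
    return parts.username, parts.hostname

print(describe("http://user:secret@server.com/login"))
# -> ('user', 'server.com'): the text before '@' is a username, not the site

print(describe("http://www.trusted.com@evil.example/login"))
# -> ('www.trusted.com', 'evil.example'): the real destination is evil.example
```

The second URL is the phishing trick: a casual reader sees "www.trusted.com" at the front, but the actual host is whatever follows the "@".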
Firewalls
Once upon a time, a firewall was your best answer to protecting your computer from hackers looking to exploit vulnerabilities in Windows. Unfortunately, that isn't always the case now. Nowadays, most users are attacked simply by browsing the Web. Hackers host Web sites containing code that exploits vulnerabilities in your operating system to infect you with a virus, install spyware, or even take complete control of your computer. Hackers can alternatively compromise a legitimate Web site for the purpose of misdirecting you to click on malicious content. Hackers can't force you to visit a specific site, but they can trick you into clicking on a link that invites malicious content into your machine.
Windows XP SP2 has an integrated firewall, previously known as the Internet Connection Firewall (ICF), that defends you against hackers who are trying to access your computer from the Internet without your permission. When a hacker attempts to connect to your computer via an unsolicited request, the Windows firewall blocks that request. Windows will actually ask your permission to unblock and allow connections for programs you actually want to run, such as instant messaging and multiplayer network games. When you unblock those connections, the Windows firewall creates an exception so that it won't ask again when your program needs to receive information to function. You don't have to use the Windows firewall; you can install and run any firewall you wish. Zone Alarm is an excellent firewall that is very popular, and it offers both paid and free versions (http://www.zonealarm.com/) that can protect your computer as much or as little as you desire. An even more comprehensive program is Norton Internet Security 2005 (http://www.symantec.com/sabu/nis/nis_pe/), which touts its ability to hide your PC on the Internet so hackers can't find it. The Mac also has an integrated firewall, just like Windows, and Norton makes a comprehensive security solution for that platform as well in the form of Norton Internet Security 3.0 (http://www.symantec.com/sabu/nis/nis_mac/). The best part of Norton Internet Security 2005 for the PC is its integrated Intrusion Detection System, which automatically blocks suspicious traffic. Not only does this product block suspicious incoming connections, but it lets you control your outbound Internet connections too. This is advantageous because, if you do get infected with spyware, Norton will alert you that a program on your system is attempting to connect to the Internet and ask whether you really want this program to connect.
By giving you the opportunity to block these connections, you can effectively thwart malicious spyware from doing its evil.

Intrusion Detection
Apple's Macintosh has its own version of this type of intrusion detection in a program called Little Snitch (http://www.obdev.at/products/littlesnitch/), which asks your permission any time a program wants to connect to the Internet. Although the Mac seldom becomes infected with spyware, it is a handy utility to have so that you know exactly what your computer is doing on the Internet.

Frozen Images
Now that your computer has been through its trial by fire(wall), the best answer is to simply put your computer on ICE! If you have resigned yourself to the fact that, no matter what you do, your computer is going to get infected, then use a program called Deep Freeze (http://www.faronics.com/html/deepfreeze.asp). This software for both Mac and PC lets you configure your computer with all the
programs you need and then freeze your configuration. If a hacker infects your computer with a virus or spyware, Deep Freeze makes the damage simply disappear. All of your settings, files and programs are completely restored to their original configurations every time you restart your computer. This makes it possible for you to avoid problems caused by software conflicts, registry and operating system corruption, lost network and Internet connections, as well as a host of problems caused by simply connecting to virus-ridden network sites. The only catch is that you have to store your personal documents on a separate drive that does not revert each time you restart your machine. Remember, this program literally resets your computer to a frozen state that you specify, so if you create a Word document and save it on the frozen drive, it will be lost at the next restart. Keep a separate drive for your personal files, and you'll have a computer that won't ever stay infected or go down. Now, all you have to worry about are mechanical failures.

Conclusion
Finding ways to prevent hackers from accessing Windows is difficult because your operating system is always in a constant state of flux. Every time you turn on your computer, browse the Web, or get a Microsoft update, your operating system changes. If you want to prevent all changes from taking place on your computer, freeze the computer, but then you can't make any changes to your operating system at all. There are good and bad points to both approaches, but in a world where having a functional computer is a necessity, this Tech Tip will keep your system running.
Tech Tip 57 - Protecting Your Identity By: Stewart S. Miller
Identity Theft
No matter what you do online, there is always a risk that someone could glean enough private information about you to usurp your identity. Your financial credit affects nearly every facet of your life, so in order to maintain control over your information, the following tech tips are in order. There are several types of information that are appealing to thieves:
1) Credit card numbers
2) CVV2 security codes (those 3- or 4-digit codes on the back of your credit cards)
3) Credit reports
4) Social security numbers
5) Driver's license numbers
6) ATM cards
7) Telephone calling cards
8) Mortgage information
9) Dates of birth
10) Online passwords
11) PIN numbers
12) Home/business addresses
13) E-mail addresses
14) Phone numbers

Compromised Accounts
When any account is compromised, close it immediately. E-mails can phish for information about you. If an e-mail sounds like it is from PayPal or your bank, telling you there is a security concern and you should click the embedded link to go to the site to correct it, DON'T! These links are often tailored to take you to look-alike Web sites designed to trick you into entering your personal
information directly into the malicious hacker's computer. What you should do instead is open your Web browser and manually type in the link to the Web site you wish to visit to check on your account (don't ever cut and paste a link). This is the only way you can be reasonably certain you won't be misdirected to someone waiting to prey on your information. Sometimes it isn't even your fault: the security at some companies that hold your personal information is lax and vulnerable to a malicious hacker attack.

Low Tech Backups
In any case where you suspect that your information has been stolen, you need to be prepared and to have organized your paper bank records for at least one year. You will need to prove your account balance to the financial institution's fraud department as soon as possible. Detailed steps to take if your ID is stolen can be found at the following links:
The plan to follow if your ID is stolen: http://www.privacyrights.org/identity.htm
When bad things happen to your good name (FTC document): http://www.ftc.gov/bcp/conline/pubs/credit/idtheft.htm
U.S. Department of Justice ID theft kit: http://www.usdoj.gov/criminal/fraud/idtheft.html
Identity Theft Resource Center: http://www.idtheftcenter.org/

Online Passwords
The biggest Achilles' heel is online passwords. To protect yourself, always use combinations of upper- and lowercase characters (including symbols and numbers) so that hackers who concentrate on commonly used dictionary words won't guess a password easily. Use longer words with more characters, and combine two words together with a symbol. You may even want to use words from two different languages so that automated password-guessing tools won't work. Computers aren't the only way thieves can get your personal information. Telemarketers are often hardworking people, but there are those who are persistent for the wrong purposes. If someone calls you and hassles you to give them your personal information, don't!
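The password advice above (two words, mixed case, a symbol, digits, perhaps words from different languages) can be sketched as a small generator. The word list and the exact format are illustrative assumptions; Python's secrets module supplies cryptographically sound random choices:

```python
# Sketch of the password recipe: word + symbol + word + digits, mixed case.
# The WORDS list (deliberately mixing English and Spanish) and the format
# are illustrative assumptions, not a recommended standard.

import secrets

WORDS = ["maple", "rocket", "puzzle", "harbor", "casa", "nube"]
SYMBOLS = "!@#$%^&*"

def make_password():
    w1 = secrets.choice(WORDS).capitalize()   # uppercase mixed in
    w2 = secrets.choice(WORDS)
    sym = secrets.choice(SYMBOLS)             # symbol joins the two words
    digits = "".join(secrets.choice("0123456789") for _ in range(2))
    return f"{w1}{sym}{w2}{digits}"

print(make_password())  # e.g. something like "Maple$nube73"
```

Because the result is two unrelated words glued together with a symbol and digits, a dictionary-based guessing tool that tries single common words will not find it.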
Even if they sound legit, you never know to whom you are talking over the phone.

Voice over IP Privacy
The Bush Administration is asking the Federal Communications Commission (FCC) to order Net telephony providers to comply with a law that would permit police to wiretap conversations carried over the Internet. Unlike regular voice calls, where wiretaps have to physically connect to the line, VoIP could be
tapped anywhere, at any time. The problem that forces us to sacrifice our privacy and rights stems from the FBI's belief that Internet telephone calls are a national security threat that must be countered with new police wiretapping rules. The way this would work is that the FBI would require broadband Internet providers to provide more efficient, standardized surveillance facilities, effectively changing the way Internet providers do business. The reason for these changes is that a terrorist could potentially use VoIP to circumvent legitimate wiretaps of calls placed over the Internet. If terrorists can evade lawful electronic surveillance through technology, it puts everyone at risk. The real trick is to find a new way to effectively trace Internet phone conversations. The federal government is funding the development of surveillance tools through scientific projects that would allow police to identify whether suspects have been using VoIP to communicate secretly. VoIP communications are hard to track. Think about the great expanse of the Internet, where traffic can go literally anywhere. Vonage and AT&T phone adapter boxes are portable and can be installed virtually anywhere in the world. You can take your box, plug it into the Internet halfway across the world, and still receive calls on your local phone number.

Anonymity
If that's not enough, there are a number of services on the Net that route your Internet traffic through a special service that removes all tracing information, making you invisible, or anonymous, to the world. When such services are used, it becomes almost impossible to wiretap a call. The only way around this problem is to work with the VoIP providers directly by embedding tracing information within the VoIP call itself. In this way, even if traffic is routed through an anonymous server, there is still a way to find out who the call is coming from and going to and trace the people on each end of the call.
Privacy advocates, however, are
infuriated by the federal government's initiatives to gain the ability to tap our VoIP calls at will. They see this as a direct attack on our privacy. VoIP providers are nonetheless working with the FBI and FCC to facilitate the approval of wiretapping requirements so that the Internet does not become a haven for secret communications between terrorists and spies.

Conclusion
Everything you do online can be tracked. Whether you are making a purchase through a Web site or calling someone using your Internet phone adapter, you must be very careful not to give out personal information that could potentially be used against you. We live in a wired world, and finding anonymity amid digital media is difficult if not impossible. Keep records, and burn information onto CDs that are not readily accessible over your home network or local computer. If you are detail-oriented about your personal information, you can save yourself many headaches later on.
Tech Tip 58 - The Evolution of the Laptop Take One Tablet PC and IM me in the morning By Stewart S. Miller
If you are interested in expanding your ability to write with your PC, then a Tablet PC is for you. The freedom of leaving the keyboard and touchpad behind was appealing to me, as I liked the idea of being able to fold my screen upright and use an electronic pen to capture my notes directly on my screen. Besides, I wanted to avoid Carpal Tunnel Syndrome, and using a pen instead of a mouse is easier on my wrist. A Tablet PC bridges the gap between the pen and the computer by allowing you to write notes directly onto the screen. However, these machines require a slightly modified version of Windows XP Pro, called Windows XP Tablet PC Edition 2005, just to make things work.

Features and Functionality
The nicest thing about this device is that you can use it at home, at the office, or even at school. Ordinary computers are difficult when it comes to design and creativity. If you don't see the value of using a Tablet PC, consider what you can do with one:
Write on the Windows Desktop
Literally handcraft your own greeting cards on the computer
Annotate and make notes on Web site pages
Instant message people in your own handwriting
Enter notes into Microsoft OneNote (a note-taking application most often associated with features available on a Tablet PC)
Convert handwriting to text
Create personalized PowerPoint presentations
Hand-sign documents or e-mails
Some users will find it difficult to write on the screen of a monitor or laptop because it is somewhat awkward to use a virtual pen in place of a real one. It does take some practice, but if you are looking for precision, there is an included pen pad (mouse pad) that you can use as a writing surface that allows for clear and accurate writing.

Imitation is not the highest form of flattery
You have to be careful of Tablet PC imitators. Some computers offer a touch screen that works with a stylus (just like a Pocket PC), but you don't want that! The reason is that your palm hangs down over the screen, causing too many contact points and making the mouse completely inaccurate. You have to hold the stylus in a perfectly upright position, which makes dragging across the screen far more difficult. A Tablet PC, on the other hand, uses a special pen that communicates with the computer directly. Your hand does not cause any contact that would move the mouse across the screen; only the included pen can move items on the screen.

OneNote
Microsoft OneNote is a great application, especially for college and business users. This application interfaces directly with the pen capabilities of your Tablet PC and allows you to take notes on the screen in your own handwriting. The program even has the capability to translate your handwriting into print, but like any optical recognition program, it finds certain words difficult to interpret.

Dexterity
There are simple dexterity advantages that only a pen can offer when dealing with graphics applications such as Adobe Photoshop and Illustrator. While a mouse allows you to control the screen, it offers limited precision. A mouse is like trying
to thread a needle with a sledgehammer. A pen, however, allows you to control the movement (right on the screen) to introduce key movements and illustrations that are essential to your projects.

Screen Rotation
One of the most advantageous features of the Tablet PC is instant screen rotation. You can literally rotate the screen so that you can work in portrait mode instead of the standard landscape mode. This means the computer evolves from something sitting on your lap into a tool you can hold in a business environment. You can use portrait mode for taking notes, just as you would when writing on a pad of paper. Landscape mode is best when viewing presentations, graphics, or charts.

Is it worth it?
So, the real question is: are the few hundred dollars extra for the Tablet PC really worth it? After dealing with computers for several years, I have to say yes! There isn't really any other device that allows you to write directly on your computer screen. While graphics tablets and high-precision mice offer better control than a standard mouse, they will never equal the control you get with a pen writing right where you need it.
Tech Tip 59 - Building Your Own External Hard Drive
By Stewart S. Miller

If you are like me, there is never enough storage to keep all of your data intact. Many manufacturers offer external hard drives, but you pay a premium to buy them. If you want to save some money and get more storage space, consider building your own device. By doing this, you can custom-build your own device, and it's easier than you might think.

Getting the Parts
To build your own external hard drive, you really just need to buy an external enclosure kit. Companies such as A-Power make them very inexpensively. When choosing a box, you just need to consider how you want to connect it to your computer. The three options you have include:
1) USB
2) Firewire
3) SCSI
The USB devices come in two flavors: USB 1.0 and 2.0. The latter supports greater data transfer speeds between the device and your computer, but don't waste your money unless your computer can support a USB 2.0 device. Only computers built within the past three years or so have the capability of supporting higher-speed USB devices. Another caveat to consider is that most USB hubs DON'T support USB 2.0 speeds even if they are connected to a computer that can support them. What you need to do is check the computer and the hub to make certain that the vendors have listed that they both support USB 2.0 devices. There is a noticeable increase in speed, so it is most definitely worth your time.
SCSI devices are somewhat obsolete in most systems. I used to run my Macintosh and PC using a SCSI adapter card because it was a reliable method. Today, you don't see Macintosh computers with SCSI as a standard feature any more. Adaptec and SIIG are two common manufacturers who build PCI adapter cards for the PC and PCMCIA adapters for your laptop to permit SCSI communications. While this technology has its merits, it is not as common today as either the USB or Firewire equivalents. Like SCSI, several manufacturers build PCI and PCMCIA adapters that enable high-speed data transfer through Firewire. Many hard drive enclosures support both USB and Firewire, giving you the option to connect any standard hard drive to your computer. The most common type of hard drive that fits into these enclosures is a standard 3.5" IDE unit. The enclosure is easily opened and includes two connections:
1) IDE Cable
2) Power Adapter
It is really as simple as popping the drive right into the enclosure, plugging in the power cables, and connecting it to your PC.

Considerations
Remember that your new drive is still unformatted. Formatting your drive so that it's usable, however, is easy enough. When you first connect your new drive to your PC or Macintosh, the computer will recognize the device as a new hard drive and ask you to format it. Assuming you are going to be connecting this drive to one or more PCs running the Windows XP operating system, "NTFS" is the clear choice for the file system you should select for formatting. If you plan to move your new drive between PC and Macintosh systems, "FAT32" might be a better choice for a cross-platform device.

Automated Backups
There are a number of automated software backup tools that you can install to back up all of your data to these devices. If you are very worried about your drive (especially if it is nearing its end of life), you can boot to a special program
CD that will allow you to back up your entire C drive to this unit. If something ever happens that causes your primary drive to fail, all you have to do is take the hard drive out of your external enclosure and install it in your PC.

Transporting Large Files
Best of all, nothing beats the price, speed, and capacity of an external hard drive for transporting very large files. If you are someone who deals with very large Photoshop, digital video, music, or PowerPoint files, an external drive lets you quickly store your images and transport them to any other computer. This is not only faster than burning a CD, but provides greater capacity and can be much cheaper in the long run than other storage alternatives.

Glossary
Universal Serial Bus (USB): an external bus standard that supports data transfer rates of 12 Mbps. A single USB port can be used to connect up to 127 peripheral devices, such as mice, modems, and keyboards. USB also supports Plug-and-Play installation and hot plugging.
Firewire: a very fast external bus standard that supports data transfer rates of up to 400 Mbps (in 1394a) and 800 Mbps (in 1394b). Products supporting the 1394 standard go under different names, depending on the company. Apple, which originally developed the technology, uses the trademarked name FireWire. Other companies use other names, such as i.link and Lynx, to describe their 1394 products. A single 1394 port can be used to connect up to 63 external devices. In addition to its high speed, 1394 also supports isochronous data, delivering data at a guaranteed rate. This makes it ideal for devices that need to transfer high levels of data in real time, such as video devices. Like USB, 1394 supports both Plug-and-Play and hot plugging, and also provides power to peripheral devices.
Small Computer System Interface (SCSI): a parallel interface standard used by Apple Macintosh computers, PCs, and many UNIX systems for attaching peripheral devices to computers.
Nearly all Apple Macintosh computers, excluding only the earliest Macs and the recent iMac, come with a SCSI port for attaching devices such as disk drives and printers. SCSI interfaces provide for faster data transmission rates (up to 80 megabytes per second) than standard serial and parallel ports. In addition, you can attach many devices to a single SCSI port, so that SCSI is really an I/O bus rather than simply an interface.
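As a rough illustration of the interface speeds in the glossary above, here is a quick Python sketch. The ratings are the nominal peak bus figures (the 480 Mbps figure for USB 2.0 is the standard rating, not quoted in the article), and real-world throughput is noticeably lower; the 700 MB payload is just one CD's worth of data:

```python
# Rough transfer-time comparison for a 700 MB backup (about one CD's worth)
# at the nominal bus ratings. Real throughput is lower than these peaks.

NOMINAL_MBPS = {
    "USB 1.1": 12,        # megabits per second, full-speed USB
    "USB 2.0": 480,       # hi-speed USB
    "FireWire 400": 400,  # IEEE 1394a
    "FireWire 800": 800,  # IEEE 1394b
}

def transfer_seconds(megabytes, mbps):
    """Seconds to move `megabytes` of data at a nominal rating of `mbps`."""
    megabits = megabytes * 8
    return megabits / mbps

for name, speed in NOMINAL_MBPS.items():
    print(f"{name:>12}: {transfer_seconds(700, speed):6.1f} seconds for 700 MB")
```

Even at these optimistic ratings, the same 700 MB that needs almost eight minutes over USB 1.1 moves in under twelve seconds over USB 2.0, which is why confirming that both your computer and your hub really support 2.0 is worth the time.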
Tech Tip 60 - Microprocessor History (Part 3, Surfing the Pipeline)
Article by Roy Davis

Maybe I'm showing my long history in California, but when I hear the word pipeline I think of a long wave breaking over its front, forming a long pipe. The ultimate hotdogging trick is to surf inside the pipeline. Well, microprocessors grew up in California too. Both Intel and AMD are located in Silicon Valley, and in their products doin' the pipeline is gnarly too. We are working our way through the history of microprocessors so we can understand what the latest new features are, what gives a performance boost, and what is just marketing hype. The pipeline is a fundamental feature of microprocessors and is the enabler for several other very important speed-up schemes. Let's see how pipelines work.

1. The CPU to Memory Interface
The first thing to dig into is how your computer gets instructions and data out of the memory and puts data back. First, the CPU fetches an instruction. That instruction might require a chunk of data, or even two. That means a single instruction might take two or three read cycles to get the instruction and data into the microprocessor. The microprocessor outputs the address on the address bus, and then reads the instruction. If the instruction calls for data, one or more additional read cycles take place. All the while, the microprocessor is sitting there waiting for the instruction and data to show up. After the microprocessor gets all the pieces of the instruction and data, it goes to work. Some instructions may take a few steps, so the memory ends up waiting while the CPU works. Lots of stop-and-go and hurry-up-and-wait. Seems like a good way to slow things down, right?

2. Complicated Wiring
The main memory in your computer is made up of RAM (Random Access Memory) chips. The microprocessor outputs the address of the data on an address bus. This is a series of wires on the circuit board, with one wire for each of the bits in the address. Even low-end microprocessors have 32 or more address lines, so you can see that buses are complex affairs. Then there is the data bus, with about the same number of wires. That's 64 copper traces on the circuit board (the wires) between the CPU and the memory. Add to that a handful of control signals to be complete. A 64-bit microprocessor would have twice as many data lines, for well over a hundred bus lines in all. It takes time to get all these bus lines moving. This is the biggest bottleneck to speeding up a computer. Everything has to work around the relatively slow speed of the instruction and data buses.

3. Systemic Process
Though early microprocessors operated just like I outlined above, it was in the early days of mainframes that someone figured out a way to put everybody to work 100 percent of the time. Back in 1944, the Colossus Mark II was used by the British to break German codes. It introduced an innovation known as a systemic process, just like the systemic process between your mouth and the other end that is still busy processing your breakfast when you are eating dinner. The data went in one end of the Colossus, and before it came out, more data was put in. The only time the CPU had to wait was for the first instruction and data, and the only time the memory bus was idle was after sending off the last instruction of the program.

4. Indigestion
The systemic process worked very well for early mainframe computers because they had very simple instructions that were, well, regular. The instructions were the same size and so was the data, so each stage in the systemic process took the same amount of time and the whole thing worked like a well-oiled assembly line.
Microprocessors started out as very simple devices without all this systemic process stuff, but then their instructions grew up very haphazardly. Some instructions were much longer than others, and the long instructions could take multiple memory read cycles to fetch. Then, the size and amount of the data varied. That made the evenly-paced systemic process break down.

5. Prefetch to Get Ahead of the Game
In many ways, the Intel 286 microprocessor was a break from the origins of a minimal CPU on a chip and led the way toward adopting mainframe-style architectures. In 1982, the 286 introduced instruction prefetching to the PC, adding a little buffer memory between the memory bus and the CPU. The memory bus would deliver instructions to the prefetch queue, and if the CPU got bogged down with a complex instruction, the next instructions would just stack up in the prefetch queue. When the CPU ran into a string of simple instructions, it would draw down the prefetch queue. Either way, both the memory bus and the CPU could run at full speed without being held up by the other.

6. Pipelining Breaks the Logjam
It wasn't until 1989, when Intel brought out the 486, that PCs had a better way to deal with the memory bus bottleneck. The 486 had a pipeline. Can't you just hear the surf guitars? Sorry, back to the microprocessors. The concept was simple: take the systemic process and break it up into very small pieces so each step has to do only very simple tasks. By making more steps in the pipeline, the tasks become extremely simple, and even complex instructions can be broken down and executed as quickly as simple ones.

7. Branch Prediction
As long as the program runs along in a linear fashion, incrementing the instruction address, the pipeline is hunky-dory. What happens when it runs into a branch instruction? The next instruction address will depend on the outcome of the execution of the instruction. How will the pipeline know which instruction to fetch next? Most program branches are part of a loop, a section of code that repeats itself until a condition is met, after which the program continues outside the loop. If a programmer takes the time to put in a loop, chances are it's going to continue in the loop for several iterations. So, if the pipeline predicts that the program flow will continue in the loop, it can fetch the next instruction in the loop. If the branch prediction turns out to be wrong, the pipeline has to be flushed and part of the production line held up until the right instruction works its way down
the pipeline. Unlike predictions in the real world, these predictions are right the majority of the time, and thus a performance increase is realized.

8. Super Scalar
If you have a pipeline with ten steps, then you can have ten instructions in the pipeline at once. If each step takes one clock cycle to complete, then you have a scalar CPU. If you double the clock speed, you get twice as many instructions executed. Some microprocessors have an amazing number of steps in their pipeline. Some Pentium 4s [http://www.geeks.com/details.asp?invtid=BX80547PG3000FDT&cat=CPU] have 31 steps. By breaking each instruction down into such small steps, it becomes possible to operate on some pieces of the instruction in parallel. In fact, designers have gotten so good at it that some of the pieces can actually be processed out of order. As the parallelism goes up, so does the speed. Such a CPU usually has more than 31 instructions in the pipeline because some of them are taking parallel paths. That means more than one instruction per clock cycle is executed. That's superscalar. Doubling the clock speed gets you more than twice the number of instructions executed. (This is not to be confused with actual parallel processing, which is the combining of two or more CPUs to execute a program.)

9. Multithreading
Remember previously in Tech Tip 55 [http://www.geeks.com/pix/techtips-DEC0105.htm] when we talked about multitasking, where the computer appears to be running more than one program at a time? What was really happening is that the operating system was time slicing, letting each program run for a short period and switching back and forth quickly so that it appears both are running simultaneously. That's how the different applications share the CPU.
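The time slicing just described can be sketched in a few lines of Python. In this toy scheduler (the program names, step counts, and quantum are invented for illustration), two "programs" share one CPU by each advancing a couple of steps before being switched out:

```python
# Toy sketch of operating-system time slicing: two "programs" (generators)
# share one CPU, each running for a short quantum before being switched out.

def program(name, steps):
    """A fake program that performs `steps` units of work, one per call."""
    for i in range(steps):
        yield f"{name}{i}"

def time_slice(programs, quantum):
    """Round-robin scheduler: run each program `quantum` steps at a time."""
    trace, queue = [], list(programs)
    while queue:
        prog = queue.pop(0)
        for _ in range(quantum):
            try:
                trace.append(next(prog))
            except StopIteration:
                break              # program finished; drop it from the queue
        else:
            queue.append(prog)     # quantum used up; go to the back of the line
    return trace

trace = time_slice([program("A", 4), program("B", 4)], quantum=2)
print(" ".join(trace))  # A0 A1 B0 B1 A2 A3 B2 B3
```

Run fast enough, the interleaved trace looks to a user like A and B executing simultaneously, which is all multitasking really is.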
Taking the concept of multitasking and turning it inside out, what if we work on different parts of the same program at the same time? It's sort of like making a baby with nine women in one month. Well, on a computer it can be done!
What if we put in another pipeline? Doing things in parallel speeds things up, right? What's really happening is that the instructions are getting interleaved in the pipeline. The advantage is that while one pipeline is waiting for a memory fetch or something else that holds up execution, the other pipeline can take advantage of the time. The problem is sorting out the bits of the program that can be run in parallel. It can be done, and as microprocessors get more complex, they can keep track of various threads of the program and put the results back together at a merge point. While complicated, it can be and is done all the time in Pentium-class microprocessors.

10. Hyperthreading
Finally, we are getting to the ultimate in CPU speedup, superthreading, or as Intel calls it, Hyper-Threading (HT). AMD counters all over its promotional literature with the similar-sounding HyperTransport technology (which, strictly speaking, is a bus technology rather than a threading scheme). So, how does it work? If you have a single thread running, much of the CPU execution hardware is sitting idle because it's prepared for all sorts of parallel processing of those big instructions. While executing a simple instruction, all the spare hardware is wasted. Running traditional multithreading doesn't help this situation because the two threads are interleaved: at any step in the pipeline, only one thread is really executing. By allowing two threads to intermix at each step, they can take advantage of slack hardware and get more done by using the facilities of the CPU more efficiently. The trick is to keep track of which thread is which through the pipeline even though the two are mixed.

Final Words
So, there you have it. Your program is sliced and diced and even stirred up as it makes its way through your microprocessor. Techniques like breaking the code up into threads that can be run separately and then put back together work well to speed things up. The challenge is for the logic of the microprocessor to keep
track of which is which at all times and even to sort out instructions that get out of order as they wind through your CPU. The latest microprocessors from Intel and AMD [http://www.geeks.com/details.asp?invtid=ADA3200DAA4BP-NB&cat=CPU] have amazingly complex systems to work on your program code and process it faster than ever before. Whether it's the latest digital photo or video editing, or the newest 3-D realistic gaming, a new microprocessor can boost the performance beyond what you are used to. You can be the Big Kahuna on your beach with a single-chip microcomputer on your desk.
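To put rough numbers on the pipelining and branch prediction ideas from sections 6 through 8, here is a back-of-the-envelope Python sketch. The stage count, instruction count, and mispredict count are invented for illustration, not measurements of any real CPU:

```python
# Back-of-the-envelope pipeline arithmetic. All numbers are illustrative.

def cycles_unpipelined(n_instr, stages):
    """Each instruction occupies the whole CPU for `stages` cycles."""
    return n_instr * stages

def cycles_pipelined(n_instr, stages, mispredicts=0):
    """One instruction completes per cycle once the pipeline is full;
    each branch mispredict flushes the pipeline (refill = `stages` cycles)."""
    return stages + (n_instr - 1) + mispredicts * stages

n, s = 1000, 10
print("unpipelined:        ", cycles_unpipelined(n, s))   # 10000 cycles
print("pipelined:          ", cycles_pipelined(n, s))      # 1009 cycles
print("with 20 mispredicts:", cycles_pipelined(n, s, 20))  # 1209 cycles
```

The pipelined figure is nearly ten times better than the unpipelined one, and the mispredict term shows why branch prediction matters so much: every flush costs a full pipeline refill, so a deep pipeline lives or dies by its predictor's accuracy.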
Tech Tip 61 - Blogging Basics I (How to Read Up on the Latest)
Article by Roy Davis

Web Log. That's a pretty self-explanatory phrase. A log is a contemporary (meaning written down as it is happening, or shortly thereafter) account of something, be it an epic voyage, a college education or maybe just the daily life of someone who can't help writing down their thoughts and feelings. Put it on the Web and you have a Web Log, or blog. But blogs don't stop there: news, special events and even the latest deal on a computer can be had via a blog feed.

1. History Lesson
As with most overused phrases, Web Log was contracted to Weblog. It wasn't long before someone had fun with the term and broke it down into a sentence, We blog. From then on, to blog was a verb and a blog was a popular thing to have. Now that we have a name for it, what is it, and where did blogging come from? The urge to post your personal history far predates the Internet-based blog. Paintings on cave walls were the first, leading to every form of graphic depiction and writing to this day. Probably the first network-published personal account with a large following was John Carmack's (http://en.wikipedia.org/wiki/John_Carmack) journal. Carmack is the programmer who wrote the video game Doom, and he embedded his log in his Finger file. A Finger file was sort of an electronic business card back in the days of mainframe computers. John took the Finger file to the extreme as he related his
story with continuing updates. He has since converted to a blog, as has much of the world.

1. Blogs Today
Blogs have evolved way beyond personal journals. There are many uses for continually-updated Web pages. News and current events are obvious examples. But it doesn't stop there. Political advocates from left to right have taken to blogs big-time. In fact, politically-motivated blogs have become a major source of news, as every word of a politician from any extreme is analyzed and turned back against them. Another form of blog is the continually-updated commercial presence on the Web. Instead of a static Web page advertising the company's wares, a blog can bring the latest events, sales, and product updates to the attention of interested customers on a daily basis. Our e-mail inboxes are full of spam, and it's difficult to tune e-mail spam filters to let in the information we want while still keeping out the dreck. Blogs that we select can give us access to our favorite sources and give us back control of our computers.

2. Exploring Blogs
One way to enjoy reading blogs is to surf the Web looking for the ones that interest you. To get started, you'll want to use a Blog Directory (http://www.blogcatalog.com/) or Blog Search (http://www.technorati.com/). Even Google has a special blog search function [http://www.blogsearch.google.com]. The directory is a listing of blog sites by category or geographic location. You can zero in on the sites that interest you or just surf around the categories to see what pops up. The blog search allows you to type in words or phrases and generates a custom list of blog sites to explore.

3. Keeping up Automatically
The fascination with blogs stems from the frequent updates, which operate somewhat like the serial short (cliffhanger) that preceded the main feature at the movie house of old. Often, the theater patrons came back the next week to see the serial no matter what feature was playing.
There is something new almost every time you check a blog out; the problem is you have to go check them out to see if there is an update. Well, computer geeks being what they are, willing to work very hard to develop a time-saving
gimmick or gadget, they came up with the News Aggregator or Blog Reader. I have my reader set to go get updates every 30 minutes. 4. Scan the Headlines
So, every half hour the reader gets all the updates. The reader does not download all the blog entries when it updates; it only picks up the title and a sentence or two of description of each new article. I can quickly scan the titles for something of interest, and if a title stops my eye, I can read the description before clicking on the item. The reader then opens a Web browser window with the blog entry in it. When I close the browser window, I'm back at the list of titles and descriptions, ready to find the next exciting nugget of information. Often, the blog entry is really a teaser to draw readers in. Once the readers are viewing the blog entry, they are presented with links to other areas of the Web site. This is a good way to build traffic on your Web site with frequent updates to events as they unfold.

5. A Blog Reader for Beginners
If you are just getting started at blogging, try BlogExpress (http://www.usablelabs.com/productBlogExpress.html). It's really simple to install, and the presentation of the headlines and descriptions is clean and easy to read. You can download BlogExpress from their Web site for free. They live off donations, so if you find that it works out for you, please help them out.
BlogExpress requires that you upgrade to the 1.1 version of the .NET Windows component before installing. There is a link right on the BlogExpress Web page, but it goes to a typical Microsoft Web page where I went around in circles trying to figure out how to download the 1.1 upgrade. Microsoft advised using the automatic Windows Update feature, so I tried that. I finally figured out that I had to select the 1.1 .NET component from one of the menus on the sidebar to the left of the screen.

6. Give it a Test Drive
After installation, you are presented with some default news feeds. Chances are you will want to weed out that list and add some blogs and news feeds of your own choosing. This is called subscribing, and BlogExpress couldn't make it easier. Go to the Web page of the blog you want to subscribe to and find the button or link for RSS or XML. Click and drag the button or link to the BlogExpress file tree, in the position where you want it to show up. Click the Check button and you have subscribed. Click the Synchronize All icon and all the RSS feeds will be checked for new content. To review the feeds, click on the branch of the tree in the left window that attracts your attention. The right window will fill with the headlines and descriptions. If there is nothing in the right window, it means there have been no updates to that blog, so pick another one. Comb through the headlines until you find one you've just got to read and double-click the headline. Clicking the headline will open up a browser window to the blog. This browser is part of BlogExpress, but it acts like a separate program. It's not Internet Explorer, though it looks very similar. Browse the page, and possibly any links if that tickles your fancy. One fault I found is that the Go Back arrow (the one on the left) stays green even when you are on the originating page; in other words, even when you are already backed up as far as you can go.
To get back to the main BlogExpress window, close the browser window or otherwise get it out of the way.

7. Don't Look Behind the Curtain
Though it seems like the smoke and mirrors of the Wizard of Oz, the technology behind blog readers and news aggregators is pretty simple. It uses much the same infrastructure as the rest of the Web that you are familiar with. The most popular protocol is RSS, which can stand for Rich Site Summary, Really Simple Syndication or several other interpretations, depending on whom you ask. RSS (an XML format) comes in several revisions, 2.0 being current. There is a second popular protocol called Atom. Many blog readers cover all the variants of RSS and Atom, so it really doesn't matter much to the reader. The RSS file is pretty straightforward, with some meta tags to clue the blog reader in to the title, the description and the link for the full article. It's only text, so it's much simpler than the HTML code you see in a regular Web page.

Final Words
So, there you have the lowdown on blogs and news feeds. Now there is no excuse for missing an important event or sale any more. Install your blog reader and let it run in the background while it gathers the headlines and article descriptions so you can peruse them any time you want. No need to go to all your favorite Web sites to keep informed. Next week, we'll talk about how you can set up a blog of your own and entertain the world with your thoughts. It's easier than you think.

New this week! 12.Jan.06 http://www.techtipsblog.com/
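The "straightforward text with a few meta tags" described above is easy to see in a concrete example. This short Python sketch (the feed contents and URL are invented for illustration) parses a minimal RSS 2.0 feed with the standard library and pulls out the title, description, and link, exactly the three fields a blog reader shows:

```python
import xml.etree.ElementTree as ET

# A minimal, hand-written RSS 2.0 feed of the kind a blog reader polls.
FEED = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Roys Geek Place</title>
    <item>
      <title>New Tech Tip Posted</title>
      <description>A two-sentence teaser for the full article.</description>
      <link>http://www.example.com/techtip</link>
    </item>
  </channel>
</rss>"""

def headlines(feed_xml):
    """Return (title, description, link) for each item, the way a blog
    reader builds its headline list."""
    root = ET.fromstring(feed_xml)
    return [(item.findtext("title"), item.findtext("description"),
             item.findtext("link"))
            for item in root.iter("item")]

for title, desc, link in headlines(FEED):
    print(title, "-", desc)
```

A real reader does little more than fetch each subscribed feed on a timer, run it through a parser like this, and show anything with a new link.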
Tech Tip 62 - Starting Your Own Blog (You Can Do It!)
Article by Roy Davis

Last time we talked about how to read a Web log, better known as a blog. It's basically a Web site that gets updated frequently, usually at least weekly, if not daily or more often. You can even be notified of the updates automatically so you don't have to visit the site to see if there is anything new. There are personal blogs, news blogs, opinion blogs and even latest-deal blogs. If you haven't heard, Geeks.com is putting up a blog so you'll never miss a limited-time bargain again.

1. Who Would Want to Blog?
As I mentioned last time, people who would have kept a journal of a trip, an education or even just daily life have replaced the fancy bound book of blank pages with a Web log. Maybe I'm a full-blown exhibitionist, because I've got a Web site full of my photos and little stories about my life with my family. A real Web site takes a lot of commitment, including big bucks for Web authoring software and a monthly fee for the Web hosting. A blog can satisfy your desire to get your word out without spending hours building a Web site, and with no software except the Web browser on your computer. In fact, you can run a blog with no computer at all, but I'll get into that in a future installment.

2. Where to Host Your Blog
Finding a place to host your blog is easier than you think. Many Internet Service Providers (ISPs) include blog hosting as part of their service. It's become popular only recently, so check with your ISP to see if they have added this feature. If they don't support blogs, you might pester them about how
everyone else is doing it. They just might upgrade their service. Of course, some Web hosting services offer blog hosting for a fee. That seems to be a bit of overkill, since blogs are mostly text-based and don't put much traffic load on host systems. If your ISP doesn't offer blog hosting, you can use one of the free blog hosts. There are several free services, so shop around for one that suits your needs. While there are a number to choose from, I decided to use Blogger.com for my blog. They are one of the big ones, and they are free. They have lots of blogs on their site, so before you jump in yourself, you might want to look around to see what others have done with their blogs. LiveJournal is also extremely popular.

3. How to Get Started
OK, let's use Blogger.com as an example. Enter the URL www.blogger.com into the address bar of your browser. You could jump right into starting your own blog, but take a look at other blogs first to get a sense of what a blog looks like in their system. I also recommend taking their tour by clicking the Take a Quick Tour button. That takes you through a few pages where they explain some of the features they offer, kind of like reading the outside of a box before buying. It's a lot easier to use the features if you know what they are!

4. Fill in the Blanks
Now you are ready to get serious. Click on the Create Your Blog Now button and get ready to fill in the blanks. Most of it is pretty simple, with questions like the user name and password you want to make up. The user name cannot have spaces or punctuation in it. I used my real name for my Display Name, but you can use an alias if you'd prefer to remain more anonymous. Don't forget to check the box to agree with the Terms of Service. You can't go on without completing that step. I actually read most of the terms, just to be sure.

5. Make Up Your Title
The next requirement is to create a catchy title for your blog.
Remember, folks will be scanning long lists of blogs, and if your blog title stands out, you are going to get more traffic. I'm not the greatest at this advertising stuff, so my blog is simply Roy's Geek Place. On that same page, you get to make up your very own URL. I just pushed the words of my blog title
together and the full URL came out http://roysgeekplace.blogspot.com. If you are an advanced user, you can set up your blog to be hosted on your own server, but that's way too much trouble to deal with. I took advantage of their free hosting. For security purposes, you have to read a word with squiggly letters and type the letters into a box. This keeps automated bots from creating blogs and using up the resources these people have kindly made available.

6. Use a Template for Some Color
Blogging started out as a simple, text-only thing, usually with a typewriter-like font on a white background. No frills at all. Since most of us are used to staring at a colorful screen with fancy fonts and graphic renderings for borders, why not let your blog do the same? Many blog hosts offer templates to help give your blog some eye appeal. A template is sort of a blank form for you to fill out, but in this case it includes the colors and positioning of the text on the screen. The easy way is to select one of the templates offered up as part of the blog creation. You can preview the templates that you see in thumbnail format; just click on one you like to expand it. You can change the template later if you wish. If there is a template that you like except for one or two features, go for it, and go back and change those features later. That will take some HTML coding, but it isn't that hard. I'll have to wait for a future installment to get into that.

7. Spice Your Blog Up with Photos
Did you ever notice that almost all novels have a photo of the author somewhere on the front or back cover, or maybe over the author's biography? People seem to be more comfortable when they know what the author looks like. Blogs are no different. Many bloggers post a photo of themselves on their blog. It's easy to get a headshot of yourself with a digital camera. It's best to get someone to take the photo for you, but you can do it with a tripod and the self-timer, or even just balance the
camera where it points at your face. Don't try to take a picture of yourself in a mirror; all you'll get is a photo of the flash going off. Don't worry about trying to frame the photo close around your face; crop it later with your photo editing software. While you are at it, knock the resolution down to something like 200 by 300. That's plenty for a headshot on a blog page. In fact, make sure the photo fits within the blog host's size restriction. I have to stay below 50 kilobytes at my blog host site. Each blog host is different, but usually you can upload the photo from your computer, or point the blog host to a copy of it on the Web and the blog host will steal it for your blog. Of course, there are those who don't want their own face and will snatch someone else's from their Web site; whatever floats your boat.

8. Group Blogs
Like many people, my family is now spread out from the East Coast to the West. We call each other on the telephone occasionally, but the majority of communication is via e-mail. That gets complicated, because sometimes family members don't copy the original message but only make comments, so it's hard to follow the ongoing exchange. Also, one member might rattle on about several subjects, and the comments might come back as separate messages. It's hard to put the thoughts back together. I think we are going to jump on the blog bandwagon and set up an extended family blog. That way we can each interject comments and questions, almost like a discussion around the dinner table. The idea of a group blog works for almost any group, including teams of people working on a common project separated by distance, or even different shifts at work.

Final Words
Go ahead and start a blog! It's only going to take you a few minutes to get it going. You can add to it bit by bit as you have the time. Make sure you send the URL to your family and friends so they can take a peek.
When you have something especially interesting to say or announce, you can send out an e-mail with the URL again so they can keep up with your exciting life. Blogging can be a lot of fun and a valuable communications resource. Simple text blogging is easy to do, but there's a lot more available if you want. In future installments, I'll get into blogging from your cell phone so you don't even need a computer to update your blog! After that, we can delve into tweaking HTML code to really make your blog special with links to your favorite Web sites and a flashy graphics layout.
Tech Tip 63 - Spice Up Your Blog With Simple Formatting Article by Roy Davis Last week, we launched into the world of blogging with a lesson in starting a blog, or Web log, from scratch. We even touched on inserting photos into your blog so the readers could see who the blogger really is. This time, we are going to go over some simple formatting tricks that anyone can and should use to make their blog site easier to read and more pleasant to look at. Then, for instant gratification we'll look at how to post to your blog without wading through the screens of your Web browser. You can simply e-mail a post to your blog. For the ultimate trick, you can send a blog post via text message from your cell phone. Check out my blog at: http://www.roysgeekplace.blogspot.com 1. Don't Look Like a Dummy One of the best ways to make a bad impression is to have misspellings in your blog. If you are handwriting a personal letter, you can cover up bad spelling with sloppy penmanship. With a Web log, your spelling is right out there in 12 point type, and when your fifth grade teacher sees your blog, she will know she wasted her year with you. She deserves better than that. Besides, there is no excuse for spelling errors, as most editors, including the ones used to post to blogs, have a spell check feature. If you just take the time to use it, you can catch the mistyped word that you know perfectly well how to spell, as well as the words that you missed in the fifth grade spelling bee. 2. Go Back and Check After you are finished typing a post, go back and proofread it. I do a lot of writing and I'm pretty good at it, but I'm still amazed at the errors I find when I reread what I've typed,
such as missing or duplicated words, grammatical and spelling errors. At least the spelling errors can be addressed right away. If you have finished a post using the Compose window, just click on the icon with the check mark. It checks your spelling and allows you to correct it. It works much like the spelling checker in Word or any similar word processing software you may use. If you have already posted a piece and want to go back and clean it up, most blog sites will allow you to edit your old posts. Just click on the Edit button and you are back at the Compose window where you can run the spell checker. While you are at it, you can add any of the text improvements we are going to talk about next. 3. Fancy Fonts Fonts can really change the character of your blog text. Most people just use the default font and don't even think about it, but an easy switch of fonts can make your text stand out. At the top of the Compose window, there is a drop-down box with a selection of fonts, again just like Word. If you don't know which font will suit you, just try them until you are satisfied. Select some or all of the text and make the font change. Beware of using too many fonts together, as this begins to look like a ransom note. One difference between using fonts in your blog and in a Word document is that you are limited to the fonts installed on most computers that will view your blog. In a Word document, the fonts are carried along in the file so it shows up the same everywhere. When viewing a blog, the Web browser must draw on the locally available fonts and will have to substitute a font if it doesn't have the one called out by the blog page. The solution to this is to stick with the Web safe fonts offered in the Compose window, because almost everyone has those fonts. Pick a sans serif font like Arial for a clean and modern look. Sans serif means the letters don't have those little bars at the ends of the lines. 
For a traditional style, use a font such as Times New Roman. That looks more like text in a book. To give the effect of dashing off a note on a typewriter, use a fixed-width font like Courier. 4. Punch Up Your Text Even though the number of fonts in a blog limits you, there are other effects that can stress a point or attract attention. Select a word or phrase that you want to stand out and click on the icon with the big B for Bold. That will fatten up the lines in the letters and
make the text stand out on the page. For extra impact, you can even increase the font size for just that word, though that trick changes the line spacing and can look a bit sloppy. When quoting foreign words or titles of books, or just because you like it, use italics. The letters will lean over and look special. Use the I icon for this type of effect. Then, there is underlining to draw attention to a phrase. The U icon makes a line under the words, just like when the teacher was grading your papers. If you get totally carried away, you can apply bold, underline and italics all at the same time. Your readers might think you are crazed if you do that, but hey, it's your blog! 5. Make it Colorful The text effects we just talked about are pretty traditional and have been used in books for centuries. An effect that is more popular on computer screens and supermarket gossip rags is the use of colored text to punch up a phrase. Select the text just as you do for the traditional text effects and click the color icon. Then, you have to select from a palette of colors. Keep in mind the background of your blog page when you select colors. Blue text on a green background is hard to read. Green on blue is even worse. Also, keep in mind that colored text needs a little help. I like to use the bold effect to fatten up the letters that I color to make sure the color punches through the background. You are trying to emphasize that phrase anyway, so bold and color work well together. 6. E-Mail Posting
There are different methods for getting your words on your blog page. It's not hard to post to your blog via e-mail. Suppose I'm at my Mom's house where only a dial-up Internet connection is available and it's really slow going through all the Web pages to make a post via the Web interface. Mom has her e-mail client going so it's a snap to zip out an e-mail message that ends up on my blog.
In order to post to my blog via e-mail, I first had to set up e-mail posting through the blog's Web interface. My blog site has a Dash Board that controls all the settings. One of the settings is the e-mail access. I filled in the blanks, which created a special e-mail address for my blog posts. I have to keep that special address secret so others can't mess with my blog. Now, when I'm at the fringe of the Internet hanging by an e-mail-only thread, I just type in the special address. The Subject line becomes the title for the post. Then, I just type in the post itself. You can use plain text for your post if you want to keep it simple. Or, you can turn on the HTML feature of your e-mail client, if it has one, and add formatting like bold, fancy fonts and colored text. Make sure you have the settings of your blog Dash Board set to accept HTML messages. 7. Going Wireless If you have a cellular phone that can send text messages to regular e-mail addresses, then you have a blog-posting device in your hand. I just posted to my blog with my standard cell phone. There are drawbacks, but I was using the most basic device. First, the length of the blog post is severely limited when using SMS, or Short Message Service (also known as texting). I can usually squeeze three or four short sentences into a single message. Of course, you can send multiple messages all in a row and then go back on the Web interface and edit those posts together into one big entry. So, when you are standing at the end of the pier watching the sunset and just have to write something for people to read, you can whip out your cell phone, tap out a brief message, and post it on your blog. You can instantly update your blog to follow your every move or mood. Final Words
We've covered some basic instructions for all these blog features. Available features vary from blog site to blog site, though there are more similarities than differences. Now that you have some general ideas about how to spice up your blog, you can try to do it yourself and learn by trial and error. But, if you are like me and want to get it right the first time, read the Tutorial, the Help section and the FAQ (Frequently Asked Questions) available on your blog site. That's how I found out about many of the features I didn't know blogs had. For the final installment in this series on blogs, I want to delve into HTML coding. That way you can modify the template that formats your blog page. You can make your page unlike any other in the visual appeal department, for better or worse. That will be a bit more technical, but after all, we are Geeks!
Tech Tip 64 - Customize Your Blog With Some Easy HTML Article by Roy Davis We have been exploring how to read a blog, how to start your own blog, and how to make your blog more attractive with some simple formatting tricks. I'm already getting comments on my blog from readers who were motivated to start their own blogs. Blogs, or Web logs, are designed to be super-simple. Anyone can master a blog. But, that doesn't stop us from getting our hands dirty with customization. Now, it's time to bring out the Geek in us and dig down to the guts of the thing: the HTML code. Sure, we could drive a stock sedan straight off the showroom floor, but why do that if you can add some fancy wheels or a hood scoop to make your vehicle really stand out from the crowd? In this case, your vehicle is your blog, and attracting attention with a snappy screen presence is the name of the game. Check out my personal blog that I just started for examples from this series: http://www.roysgeekplace.blogspot.com 1. Under Construction You have probably noticed that the color scheme on my blog is kinda funky and things are shifting around a bit. It used to be all green and now a lot of blue (my favorite color) is showing up. That's because I'm experimenting with tweaking the HTML template that defines things like the background color and the default font. The vast majority of bloggers never fool around with HTML code and their blogs look
just fine. You have a lot of control over the content and format of your posts, but the template is the only place you can control the background and the overall arrangement of content on your blog page. Since the purpose of this exercise in blogging is to learn all the nuts and bolts, I'm going to mess with all the details of the template and see what comes of it. The first thing I did was dump the olive green background in favor of medium slate blue. I used to have a bedroom with the walls painted that color and I find it very relaxing. Let's see what else I can customize. 2. Where Did HTML Come From? First, let's get our bearings. HTML, or Hyper Text Markup Language, is the code that makes Web pages something different. HTML codes set the font, the color of the background, and control dozens of other attributes that differentiate one Web page from another. That same code is used by blogs, but templates are usually used to gloss over the whole HTML coding thing. To be fair, I usually whitewash right over the HTML coding on my Web page too. I get away with that by using a Web page authoring program like Microsoft FrontPage. You can edit the page using Word-like tools to set the text attributes. FrontPage then generates the HTML code to make your Web page look the same as what you designed. The drawbacks are that FrontPage is not free and you are tied to a single computer for adding to your Web site unless you pop for more copies of FrontPage. 3. Borrowing Color From Web Pages As I said, your blog page uses exactly the same coding language as a Web page, so we can steal some of the tools used for Web pages. The first thing to address is color. We need to set the color for the background, the fill, the text color, and even outlines for photos. The colors on your screen are made up of red dots, green dots
and blue dots. If you turn on all the dots to full brightness, it looks white to the eye. If you turn them all off, then naturally you get black. By varying the brightness of the three colored dots, you can generate what looks like millions of different colors. These colors are described by the individual brightness of the three primary colors. The value can vary from 0 (off) to 255 (full on). The simplest way to describe white is 255, 255, and 255 for full red, green and blue. It's more common to see the brightness numbers expressed as hexadecimal numbers. Hex is base 16, so the counting goes: 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, A, B, C, D, E, F. 0 in hex is also 0 in our normal decimal numbering system, and 255 in decimal translates to FF in hex. So, white in hex is FFFFFF. This is called Hex 6 notation. Often, you don't need such precise control over color, so a shorthand called Hex 3 is used. The lower order digit of each color brightness number is thrown away, and white then becomes FFF. The browser expands each remaining digit by doubling it, so FFF comes back as FFFFFF; you lose only the fine shadings in between. If all this is too complicated to visualize, check out this Web page that has many examples of colors and various ways to express them: http://www.webreference.com/html/reference/color/propcolor.html I like to keep this Web page open while I'm fiddling with my template so I can instantly visualize the color palette I'm poking in. 4. Hacking the Code Finally, we get around to actually changing some HTML code. On Blogger.com, it's easy to find your template. There is a tab on the page where you compose posts, so instead of writing things that people will read, click the Template tab and write things the computer will read: the instructions for the format of your blog. It's pretty scary looking stuff when you first see it, but the beauty of working with an existing template is that you don't have to know very much at all to make some impressive changes in the way your blog looks. I wanted to change the text color from black to a dark blue. 
The original line of code was: color: #000;
That's expressed in the Hex 3 format. Actually, a single zero would do also, but it's easier to visualize when each color is represented. I changed the text color to a rich dark blue by putting in some blue while leaving the red and green at zero: color: #008; 5. One Step At a Time You want to make only one change at a time, then save the code and preview your blog to see what you changed. Then, you can decide if you want to keep the change or go back to the way it was. If you restrain yourself and carefully evaluate each change, you can learn HTML code without ever cracking a book! 6. A Trick for Experimenters When you are just learning, it's nice to be able to experiment with new HTML commands. Sometimes, it's hard to tell what happened and you want to remove the new code you just wrote. Or, you may just want to remove an existing command to see what it doesn't do. Instead of deleting the unwanted code, you can just comment it out. HTML, as well as most programming languages, allows you to surround sections of text with special characters, and whatever is inside the special marks is ignored. The purpose is to allow comments to be inserted in the code so you can remember why it is the way it is. You can use this comment capability to turn things off. In the style sheet portion of the template, where lines like color: #008; live, the open comment characters are the forward slash and star. To close a comment, use the star and then the forward slash. Here's an example: /* This text is ignored and will not affect your page */
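If you want to double-check how the Hex 6 and Hex 3 notations relate, a few lines of code make it concrete. This Python sketch is purely illustrative (nothing like it goes in your blog template); it converts red, green and blue brightness values, each 0 to 255, into both notations:

```python
# Illustrative converter for the Hex 6 and Hex 3 color notations.
# This is a stand-alone sketch, not code for your blog template.

def hex6(r, g, b):
    # Each channel (0-255) becomes two hexadecimal digits.
    return "#{:02X}{:02X}{:02X}".format(r, g, b)

def hex3(r, g, b):
    # Shorthand: keep only the high-order hex digit of each channel.
    return "#{:X}{:X}{:X}".format(r >> 4, g >> 4, b >> 4)

print(hex6(255, 255, 255))  # #FFFFFF -- white
print(hex3(255, 255, 255))  # #FFF
print(hex6(0, 0, 136))      # #000088 -- a rich dark blue
print(hex3(0, 0, 136))      # #008    -- the same color in shorthand
```

When a browser sees the three-digit form, it doubles each digit, so #008 comes back as #000088; the shorthand only loses the fine shadings that need the second digit.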
Another trick I use all the time is to make a copy of an existing line of code and comment it out. I then fiddle with the copy of the line like this: /* text-align: left; */ text-align: center;
The original code called for the text to be aligned at the left margin. I wanted to try putting it in the center of the column, so I made a copy and commented out the original. I guessed that the code word would be center, and I was right. If I am sure I want to keep the change, I can just delete the original line. If I want to revert to the old format, I can delete the new line, remove the comment characters, save, and I'm right back where I started. 7. Call For Help In this short piece, I certainly don't have room to talk about all the ins and outs of HTML coding, but I wanted to get you started. You don't have to go to the bookstore and buy a
thick tome on HTML coding. The help you need is right under your mouse button finger. If you go to Blogger.com and click on the Help section, you will find more details on modifying your template than you can digest in a week. I'm sure your blog site has similar help available. Pick a feature you want to change and look through the long list of template modifications that they have detailed for you. Start with simple things like changing the color of the background. You'll be amazed how one line of code can make such a big difference in the appearance of your blog page. Go after the obvious things like the font and color of the title. Insert a custom graphic to truly make the page your own. Don't get too wild, and remember that dark text on a light background is easier to read; newspapers are still black text on white for a reason. Final Words I hope this piques your interest in learning more about how Web and blog pages are put together, and that you put that learning to work making your blog different from the next guy's. I don't live in a house that looks like the neighbors', so why should my blog? Be colorful. I am totally turned off by blogs that have tiny white text on an all black background. I run across way too many of them and usually just go right on to the next one. Black is just boring unless you have some other color graphic that just has to be set off with a dark background. Take it one step at a time and make copies of the original code so you can always back up to remove an oops. Also, remember that if you totally screw it up, you can apply a new stock template and be all the way back to where you began: no harm, no foul.
Tech Tip 65 - Geek Gadgets for Your Sweetie Article by Roy Davis Let's put aside hard-core blogging with HTML and the intricacies of Pentiums and Athlons to talk about something more fun. Gadgets! All geeks like gadgets. Even Bill Gates has a multimillion-dollar house full of gadgets that he and his wife love to show off when they entertain. With Valentine's Day coming soon, we don't have to spend that kind of money to show someone we care. And we don't have to stick to gadgets only a geek would love. There are plenty of goodies that a geek could give to someone who doesn't even like computers. It doesn't matter if it's Valentine's Day or the anniversary of your first kiss, a personal gift is something special. 1. Photos Close to the Heart The old fashioned way to get your loved one to carry your picture was to give him or her a locket to wear on a chain around their neck. The photo would be tiny. About all you could fit in would be a close-up of your face. A modern girl might not go for the chain around the neck thing, but always carries another kind of chain: a key chain. How about combining a key chain with a digital photo viewer? Well, Geeks.com is right there with the latest gadget, the My Photo Digital Photo Frame Key Chain <http://www.geeks.com/details.asp?invtid=DIGITALPHOTOFRAME&cat=CON> and it's not going to break the bank. This
gadget has a full one inch screen, like going from an old TV to a widescreen when compared to a locket. Also, the locket carried one or maybe two tiny photos. The Digital Photo Frame shows up to 26 digital photographs. Sit down at your computer with your photo editor and comb through your collection of digital pictures for those special ones that bring you closer together. Do some tight cropping to maximize the detail. Hook up the USB cable to your computer and download the photo files to the digital viewer. Now, you have a personalized gift that goes far beyond a box of chocolates. 2. Signpost to Your Heart If your loved one is young or young at heart, you can deliver your love note on something they could wear. Sure, you could buy a custom printed tee shirt with a message professing your love, but you only get one shot at it. Plus, your loved one wouldn't want to wear it more than about once a week. For geeks, there is a better solution, one that can be worn every day and still look hip. It's a Programmable Scrolling LED Chrome Belt Buckle <http://www.geeks.com/details.asp?invtid=LEDBELTBADGE&cat=GDT> that would look great with jeans or even a belted skirt. The bright blue LEDs make the message readable in any light. With six different messages available, you will have lots more to say. Each message can be up to 256 characters, so you would have plenty of room for: "Roses are red, violets are blue, I got this buckle at Geeks, to show I love you." Use your geek skills to program in the messages with the on-board buttons.
3. Dreaming of You If you want to be more subtle than a bright, gleaming, scrolling sign, but show your love that you are dreaming of them and care about their health, how about a Mist of Dreams Table Lamp <http://www.geeks.com/details.asp?invtid=SG-40B1W&cat=GDT> with relaxing blue LED illumination? The lamp sits on a desk or near a workspace and generates a mist of moisture that replaces the humidity robbed by indoor winter heating. The anions produced in the process promote health, and it looks cool, too. Your sweetie will be left with their chin in their palm, watching the mist boil out of the dream lamp and thinking of how special you are to think of such a gadget. 4. Your Burning Desire What is more romantic than an evening surrounded by the glow of candles? Maybe you cook a special meal, put on some soft tunes on the computer sound system and turn out all the lights. Then, light up some candles and enjoy the warm atmosphere. Real candles can be a problem. Dorm rules may preclude them. Having lots of drapes and burning candles would not be a good idea. Geeks has a totally safe alternative, the LED Candle <http://www.geeks.com/details.asp?invtid=LED-CANDLE&cat=GDT> that includes its own plastic cup to spread the light. Since an LED generates the glow, there is no flame and almost no heat, so even if one of these gets knocked over, you don't have to worry about starting a fire, except maybe in your lover's heart. And to be a true geek, analyze the operating cost and you will find that the batteries to run the LED Candle are cheaper than burning real candles!
5. Give Something to Remember You By What if your loved one is past the wearing-jeans-with-flashing-messages stage (say, like a grandmother), but you still want to send a personalized message? How about making a video of you and your family to send off to grandma? She might not even have a computer to play it on, but no matter, because you can get her a DVD player to plug into her television. I'm sure grandmother has a library of old soap operas on video cassette, so you don't want to cut her off from those. How about a combination DVD player and VHS player in one unit? The Progressive Scan DVD+VCR Combo <http://www.geeks.com/details.asp?invtid=XBV443-R&cat=CON> is just the thing. This deck will even play music CDs, including MP3 and WMA. If you are not up to directing a video production, put your still photos on a CD-R and grandma can watch them on her television using this gadget. 6. That's Not All That's Burning Then, to make the DVD itself, you need a DVD burner to install in your computer. The LG Double Layer 16X DVD+/-RW Drive with LightScribe <http://www.geeks.com/details.asp?invtid=BLKGWA-4166B-DO-N&cat=DVD> is the perfect addition to your computer for making personalized gifts. First, it can make single or dual layer DVDs, so you can use inexpensive media for short subjects or capture full definition feature length movies if you want. The really cool feature of this gadget is that with LightScribe DVD Media, http://www.geeks.com/details.asp?invtid=95116DT&cat=DVD, you can produce professional looking DVD labels that are burned right into the top surface of the disc. After recording the video portion
of the DVD, you flip the disc over and put it back in the LightScribe-enabled drive. With free software that you can download from the Web, you can wrap custom text around the hub of the DVD using any font on your computer. There are stock background images available, but for a truly personalized gift, use one of your own photos cropped to fit on the disc surface. The preview will help you adjust the image to fit just the way you want. 7. Take a Romantic Trip I'm taking my wife to La Paz for Carnival this year. It's like Mardi Gras, only more intimate and up-close, in Baja California. We'll have a seaside hotel right in the middle of the action for the price of a Motel 8 State-side. Of course, Geeks.com has all sorts of geeky gadgets that make travel more enjoyable. They didn't pass up on the basics: an Overland Travel Duffle Wheeler <http://www.geeks.com/details.asp?InvtId=631429&cat=CAR&cpc=CLR> so you can lug your stuff while still holding hands with your significant other. The Duffle Wheeler has lots of room in several zippered compartments for clothes, swimsuits or whatever you take on a romantic mini-vacation. The handle telescopes so you can walk comfortably upright, and the Duffle Wheeler rolls smoothly on its in-line skate wheels. It has a mesh pocket on the outside so you can keep your MP3 player handy for the trip. The feature I like over a standard suitcase is that the duffle collapses so you can slip it under a bed for storage instead of taking up valuable closet space. 8. Share the Love Songs I mentioned taking an MP3 player along on your romantic trip. It wouldn't be very romantic with you sitting there blasting some rock tunes while your honey listens to the lout sitting in the next seat. Share your tunes with a Logitech Playgear Share Audio Splitter <http://www.geeks.com/details.asp?invtid=980378-0403-
DT&cat=CBL> that is compatible with most digital music players with a 1/8 inch (3.5 mm) jack. Of course, to do that you need a pair of headphones, so why not get a matching set of Altec Lansing AHP 512 Headphones <http://www.geeks.com/details.asp?invtid= AHP512I-DT&cat=SPK> to share for about the same price as a dozen roses at the supermarket. The larger ear cups with soft padding will help keep out the noise of the airliner and that person trying to hit on your travel partner while you listen to tunes to get you in the mood for when you arrive. The drivers use a powerful Neodymium magnet for more efficiency so your MP3 player can power two headsets loud enough to overcome airplane noise. Final Words Think about what the people you love enjoy and how the gift of a geeky gadget can fit into their lifestyle. Many geeky gadgets can be customized to make them truly personal, and some Geek gifts can even be shared so you too can enjoy the benefits of your high-tech shopping.
Tech Tip 66 - Take the PCI Express for Great Video Article by Roy Davis One of the hot new technologies is something called PCI Express, a new I/O (Input/Output) bus architecture that is the first big step in this direction in at least a decade. The most evident performance measure of our computers is the speed at which detailed graphics are updated. PCI Express, also known as PCIe, gives this performance a boost way past anything that was available before. 1. What's an I/O Bus? Most of the hardware that makes your computer a computer is actually on a single Motherboard <http://www.geeks.com/details.asp?invtid=BOXD955XBKLKRDT&cat=MBB>. It has the CPU (Central Processing Unit) in the form of a single chip microprocessor such as an Athlon 64 <http://www.geeks.com/details.asp?invtid=ADA3500DAA4BW-N&cat=CPU>, and the main memory that we usually call RAM <http://www.geeks.com/details.asp?invtid=KVR333D4R25_1G-DT&cat=RAM> for Random Access Memory. These two items handle most of the actual computing functions and have a very high-speed bus between them.
But, raw computing isn't much fun, especially when it comes to video games and simulations. You need some eye-dazzling color graphics to show off what your computer can do. The circuits that generate those graphics are usually on a plug-in gadget called
a graphics adapter or Video Card <http://www.geeks.com/details.asp?InvtId=VX700512P&cat=VCD&cpc=DSP>. The video card connects to the motherboard through a connector with lots of pins. It takes all those pins to carry the I/O bus with its many data bits and address lines. Generally, the CPU puts an address on the address part of the I/O bus to point at a particular I/O address location. These addresses are a lot like memory addresses, except that instead of a RAM device, it's an input or output device that is being selected. Then, the CPU either sends data down the data bus part of the I/O bus or requests that the I/O device drive data back in the other direction. Data goes out and data comes in; that's why it's called an Input/Output bus. 2. A Little History of the PC I/O Bus At the dawn of the PC era, we had the ISA (Industry Standard Architecture) bus that moved 16 bits of data at a time and was clocked at 8.33 Megahertz. The CPUs back then were 16 bit affairs, so the ISA bus was just perfect. The connector had fewer connections. Life was simple back then, but our computers were slow. In the early nineties, CPUs expanded to 32 bits, so the I/O bus had to grow up, too. The PCI (Peripheral Component Interconnect) bus has twice as many address and data pins, which made the connector a lot more complex with the pins squeezed closer together. The clock frequency took a leap to 33 Megahertz, so circuit routing was trickier, and early PCI boards worked only with particular motherboards. 3. The AGP Video Bus Makes 3-D Possible At the end of the nineties, the AGP (Accelerated Graphics Port, also called Advanced Graphics Port) special purpose video bus came about. It's a 32 bit bus like its predecessor. The PCI bus was still used as the basic control interface to the video card, but the AGP handled most of the data transfers for 3-D video processing. 
Since the AGP was dedicated to video processing, the designers took liberties to change it often, and it was difficult keeping up with which video card worked with which motherboard. The first version of AGP is known as AGP 1X. It doubled the PCI clock rate and moved 266 Megabytes per second with 3.3 Volt digital signals. AGP proceeded through AGP 2X, 4X, and even 8X. The data rate stepped up to 533 MB/sec, then 1066 MB/sec and finally 2133 Megabytes per second. As the speed went up, the digital signals went from 3.3 Volts, to 1.5 Volts, to 0.8 Volts. There was even a 64 bit version called AGP 64. 4. PCI Express Combines and Simplifies As I mentioned, when AGP came along, it did not do away with the PCI connection to the video card. AGP was used for high-bandwidth data movement while the PCI bus pulled the strings to control the video card. The need for speed was not the only reason
for PCI Express, though it did immediately double what AGP 8X could do performance-wise. Computer users want full-screen, full-motion video with 3-D rendering, and PCI Express has the bandwidth to pull it off.
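To put those bandwidth claims in perspective, here is a back-of-the-envelope calculation in Python. It assumes first-generation PCI Express signaling, 2.5 gigabits per second raw per lane with 8b/10b encoding (10 bits on the wire for every 8 bits of data); those figures come from the original PCIe specification, not from this article, so treat the sketch as illustrative:

```python
# Back-of-the-envelope PCI Express bandwidth, assuming first-generation
# signaling: 2.5 gigabits/s raw per lane, 8b/10b line coding, per direction.
RAW_GBITS_PER_LANE = 2.5
ENCODING_EFFICIENCY = 8 / 10  # 8 data bits per 10 bits on the wire

def lane_mb_per_sec():
    bits_per_sec = RAW_GBITS_PER_LANE * 1e9 * ENCODING_EFFICIENCY
    return bits_per_sec / 8 / 1e6  # convert bits/s to megabytes/s

for lanes in (1, 4, 16, 32):
    per_direction = lanes * lane_mb_per_sec()
    print(f"x{lanes:<2} link: {per_direction:.0f} MB/s each way")
```

A single lane works out to 250 MB/s per direction, so a 16-lane link carries about 4000 MB/s each way, which matches the "16X, or about 4 Gigabytes per second, in both directions" figure quoted above.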
One of the drawbacks to AGP is that it is a one-direction-at-a-time bus. Data can flow in both directions, but it can only attain the 8X speed in one direction; in the other direction, it's more like 1X. PCI Express simplifies things by using two separate buses, one for upstream and one for downstream data transfers. Not only does this eliminate the time wasted while switching direction, data can be flowing in both directions at the same time. That means data can be moving at 16X, or about 4 Gigabytes per second, in both directions. That's a whole heap of data bouncing around! 5. Lines Turn Into Lanes All of the PC data buses before PCI Express used a standard parallel data bit design. The ISA bus had 16 bits of data and 16 bits of address. That meant there were 16 wires carrying data and 16 more wires handling the address bits. When the I/O bus grew to 32 bits, the number of wires and the associated pins on the connectors doubled. One of the problems with all these wires carrying high-speed data is something called skew. When the address is put on the address bus, you have to wait for all 32 bits to be at the right logic state before going on to the next step. When the data is put on the data bus, you have the same problem. Even minor variations in the length of the copper traces (the wires) on the circuit board can introduce skew, and the clock rate has to be reduced to get the device to operate reliably.
PCI Express replaced the parallel bus with a series of serial buses. Instead of 32 data bits all clocked at the rate of the slowest bit, it has up to 32 lanes, each clocked at the highest rate that lane can handle. Think of it like a relay race with four competing teams and four legs in the race. In the parallel AGP model, at the end of each leg all of the runners have to wait for the slowest one to arrive before taking off on the next leg. At each hand-off of the baton, all teams have to wait for the slowest runner to make it there. With PCI Express, the signals are each sent at their own clock rate. When a runner hands off the baton, the next runner on his team can immediately go. The individual runners are not any faster, but by removing the coordination time at each handoff point, the overall process is sped up. 6. Say Goodbye to the North and South Bridges Up until now, most PC motherboards had two major chips on them other than the microprocessor. These are the Northbridge and Southbridge chipsets. The Northbridge manages data transactions between the CPU, the RAM, and the video card. The Southbridge funnels data to and from all the rest of the devices connected to the I/O bus. The Northbridge does high-speed data movement while the Southbridge deals with the slower-moving I/O. PCI Express deals with the high bandwidth/low bandwidth problem in a much more elegant way. Notice above I said up to 32 lanes. The number of lanes for an I/O device can vary from 1 all the way up to 32, doubling every time. That means there are I/O configurations for 1X, 2X, 4X, 8X, 16X and 32X. Slower devices can use 1X or 2X. For ripping performance, like a video card, 16X or 32X does a better job. Even the size of the connector varies, from very short for the 1X interface up to a much longer 32X connector. What is really mind-blowing is that different speed cards can, within limits, be plugged into different size PCI Express slots! 7. Make the Hardware Simple! 
To abandon the legacy of PCI and AGP must require some huge advantages, and PCI Express has them. It's too bad that they named it PCI Express, because that sounds like an evolutionary change, when in fact PCI Express throws out the whole concept of the I/O bus and replaces it with something truly revolutionary. As I mentioned, the switch from a parallel bus to lanes, which are really a bundle of serial buses, made the timing and routing of signals much easier for the hardware designer. Another advantage is that the I/O bus is scalable from one lane (1X) for simple and slower devices like a USB controller all the way to 32X for the power-hungry speed demons, such as a high performance video card. The simple devices only need a few wires run to them, saving circuit board space and many pins on the IC packages. 8. Keep the Software
Even though PCI Express is a revolutionary improvement in PC hardware, it is transparent to the software that runs on it. Yes, you need to replace both the Motherboard <http://www.geeks.com/details.asp?invtid=P5WD2-DT&cat=MBB> and all the plug-in Adapter Cards <http://www.geeks.com/details.asp?invtid=100714400DT&cat=VCD> to convert to PCI Express, but the same operating system (Windows XP, Linux, etc.) can run on the new configuration. 9. Save the Juice Our laptops are a marvel of energy-saving ingenuity, since they have to run on a battery and battery life is one thing we value in a small, portable computer. Even our desktop machines have to pay attention to power savings, so we don't need too many fans to keep them cool. The simple I/O devices in your computer only need one bus driver for PCI Express 1X instead of 32 for the old PCI bus. That saves a lot of power. Even the faster devices can turn off lanes when they are not busy and conserve energy that way.
Final Words If you are thinking about a new gaming machine or an upgrade to your old faithful box, check out the offerings of PCI Express motherboards and video cards before spending money on older AGP models. You have to choose between sticking with the older technology, or making the leap to the new way of doing things. Yes, many of the new PCI Express motherboards have PCI slots to accommodate older adapter cards, but not for video. You pretty much have to match both the motherboard and video card to have PCI Express performance. That means there are bargains in high-end AGP motherboards and video cards as existing stocks are flushed out in favor of PCI Express. On the other hand, the PCI Express-based machine will carry you farther into the future.
Tech Tip 67 - High Resolution Audio Article by Roy Davis When it comes to your computer monitor, the term high-resolution is easy to understand. 1,600 by 1,200 pixels on the screen has a lot finer detail than the old 640 by 480. 24-bit color has millions of shades, where primitive color PC displays had only a total of 16 colors. Audio, be it for music, music videos or movies, can benefit from high resolution also. Digital audio comes in higher or lower resolutions too, though the difference is a little harder to explain. We'll go through it here, and soon you'll be the expert on high-resolution audio in your crowd. 1. Audio Before Digital
The old analog audio recording industry was full of different formats, with all different sizes of discs and recording speeds of 33 1/3, 45 and 78 RPM. We had tons of gadgets to clean our records and rid our sound of the clicks and pops that dust causes. Getting an LP out of the cover, onto the turntable, and the tone arm lowered onto the disc without scratching it was a high art. Audiotape came in reel-to-reel, 8-track and audiocassette, again with different recording speeds and an alphabet soup of noise reduction schemes like Dolby-A, B, C, S and HX Pro. Dust that flaked off the tape would gum up the transport mechanism and the tape would jam, usually destroying the sound quality for that piece of tape. Unfortunately, music signals in analog form are subject to all sorts of degradation. Electrical circuits introduce noise and hum. The recording medium itself will roll off the
high frequencies or introduce variation in the pitch. Magnetic tape is especially bad at distorting the signal and adding noise, which is why Mr. Dolby got very rich with his noise reduction tricks. Digital audio avoids all this, but of course there are other issues to deal with. 2. Compact Disc, the Audio Standard Researchers at Philips had been working with optical discs for recording movies for some time when they shifted their attention to an audio-only disc. At first, they tried analog recording methods like their video discs, but finally decided to use PCM, a digital audio format previously used for long distance telephone links. PCM stands for Pulse Code Modulation, a very simple digital system where the audio is sampled at a constant rate and the samples are represented by digital numbers. There is no compression, just the raw samples. Philips brought Sony on board, and together the two companies put together the digital audio standard that is still the most-used audio format for commercial music sales. They called it the Compact Disc, which we all shorten to CD. Since straight PCM is used, they picked the format to squeeze as much music as possible on the disc while maintaining adequate fidelity. 3. Sampling Theory In order to understand digital audio specifications, we'll have to define digital sampling theory. The most basic thing we need to get under our belts is the concept of converting analog music signals into digital. That's called sampling. The electrical signal that represents music is a nice smooth wave that follows the sound pressure waves that come from a singer, an instrument, or the combination of both. 4. Breaking Up Is So Hard to Do If we use an analog meter to follow the music signal, the needle would swing smoothly back and forth with the rise and fall of the waves. What if we use a digital meter and quickly write down the numbers, in essence taking many numerical samples of the waveform? 
It turns out that if we take samples at a high rate, the mass of numbers will accurately represent the audio waveform. How fast is that rate? A researcher named Nyquist figured out that if the sampling rate were at least twice as high as the highest frequency signal to be digitized, then the sampled digital signal would accurately represent the analog signal. Since the highest
frequency most human ears can hear is 20 Kilohertz, the designers of the Compact Disc decided to use a sampling rate of 44.1 Kilohertz. Though theoretically they could have used 40 Kilohertz, there are some practical problems that require filtering, and they needed a little room for the filter roll-off, so they selected 44.1 Kilohertz. We'll get back to this issue of the filters and picking a sample rate that is just barely enough. 5. All the Little Pieces Compact Discs have just barely enough sampling rate to capture the entire audio spectrum. The other issue is the number of bits of resolution used to represent the amplitude of the signal. If you only listen to AM radio, where the music is loud all the time, you would be happy with very few bits in your audio samples. But if you want to listen to exciting music that is loud sometimes and very soft when it needs to be, then you want more detailed samples, which means more bits. At the time the CD was being designed, the audiocassette was one of the most popular recorded music formats. The ratio of the loudest music to the softest was about 45 decibels, or dB. That's a factor of more than 20,000 in power from the lowest to the highest amplitude, which seems like a lot, but the human ear has an amazing ability to discern a huge range of loudness. Even in a noisy car, the dynamic range heard on cassettes, or ratio of softest to loudest, bothered me. There was just too much hiss. The best that noise reduction (Dolby and others) managed to stretch the dynamic range was 60 or 65 dB, up a factor of 100 and a great improvement, but when stopped at a traffic light, the hiss was still evident. The PCM audio encoding that the Compact Disc designers copied could achieve a 35 dB dynamic range with only eight bits of resolution. But they played tricks with the bit values to stretch the dynamic range at the expense of fidelity. 
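Before moving on, Nyquist's rule from above can be checked numerically. A tone below half the sampling rate survives intact; one above it "folds" back down into the audible band as a false tone, which is why the recording chain has to filter first. The helper below is a hypothetical illustration, not anything from the CD specification:

```python
def alias_frequency(f_signal_hz, f_sample_hz):
    """Apparent frequency of a sampled tone after aliasing folds it
    into the 0 .. f_sample/2 band."""
    f = f_signal_hz % f_sample_hz
    return min(f, f_sample_hz - f)

# A 15 kHz tone sampled at 44.1 kHz comes through unchanged...
print(alias_frequency(15_000, 44_100))   # 15000
# ...but a 24.1 kHz ultrasonic tone folds down to an audible 20 kHz.
print(alias_frequency(24_100, 44_100))   # 20000
```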
Since 35 dB was obviously not enough, and digital circuits like to come in multiples of 8 bits, the designers picked the next increment up, that being 16 bits. With 16 bits of equal step size, they had 96 dB of dynamic range, which at the time was much better than any other recording medium, including the professional analog tape decks in recording studios. 6. The Bad Old Days In 1982, when the CD was introduced, we were used to the hiss of magnetic tape and the clicks and pops of LP gramophone records. The CD was such a huge improvement that we assumed it was perfect. Many young people today have never even heard an LP played, so CD digital music is their frame of reference. It's far better than what we used to have, but it is still not even close to perfect.
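The 96 dB figure follows from the standard rule that each bit of linear PCM resolution adds about 6 dB of dynamic range, i.e. 20·log10(2^bits). A quick check:

```python
import math

def dynamic_range_db(bits):
    """Dynamic range of linear PCM with equal step sizes."""
    return 20 * math.log10(2 ** bits)

print(round(dynamic_range_db(16)))   # 96  -- the Compact Disc figure
print(round(dynamic_range_db(24)))   # 144 -- 24-bit studio audio
```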
7. Higher Expectations Why would we want a higher sampling frequency than 44.1 Kilohertz and more than 16 bits of dynamic range? Because we are crazy about absolutely perfect sound! More and more, we listen through earphones or 100 Watt car stereo systems, where the noise floor of 16-bit audio is bothersome and the harshness of the sharp cutoff filters for the 44.1 Kilohertz sampling irks us. The dynamic range is pretty easy to understand, with the background noise level very low and the loud peaks very high. But the sampling rate/cutoff filter issue takes a little more explanation. We are trying to record the audio spectrum from 20 Hertz to 20 Kilohertz with a sampling rate of only 44.1 Kilohertz. That means during recording, signals above 20 Kilohertz have to be sharply attenuated, or they end up causing digital artifacts down in the audible portion of the frequency band. In fact, many commercial CDs roll off most sounds above 15 Kilohertz to avoid the distortions of the cutoff filters. On playback, you have a similar situation with filtering at the band edge. There are filters that can do the sharp cutoff, but a side effect of the sharp filters is phase shift, where different audible frequencies are delayed more than others. The effect is a harshness to the sound that is hard to pinpoint, or even to measure with simple instruments, though sophisticated lab equipment can show the presence of the phase shifts. 8. Six is More than Two In addition to our increased demand as consumers for higher quality audio, with many of us having at least some kind of basic home theater setup, we've also become accustomed to the immersive audio experience of surround sound, or sound presented over more than just two (stereo) discrete (individual, independent) channels. 
Recording engineers and record producers experimented with quadraphonic, or four-channel, sound in the 60s and 70s, but it required expensive, specialized equipment on both the recording (studio) and especially the playback (consumer) ends, and it never really caught on. While Compact Discs offered audio quality better than anything previously available, because of technological limitations there was only so much data (music) you could fit on a single disc, limiting developers to two-channel, stereo audio. With the advent and rapid consumer adoption of DVD, all of that changed. Because of the huge increase in the amount of data that can be squeezed onto a single disc (up to 8.5 Gigabytes now on one Double Layer DVD, compared to ~700 Megabytes on a CD), there is room not only for lots of high-resolution video and high-fidelity audio, but also for several more discrete audio channels, as many as six or more.
A typical 5.1 home theater surround sound configuration includes two stereo channels (front sides), a center channel, two rear side channels, and a dedicated LFE (Low Frequency Effects, or subwoofer) channel (the .1). Because of this, some artists are beginning to consider mere CD-quality stereo recordings as giveaway-quality audio, and are releasing new music on DVDs encoded with audiophile-preferred DTS (Digital Theater Systems) digital 5.1 surround sound. 9. More and Faster The solution to problems with phase shifts and distortion is simple: more bits of resolution to produce more dynamic range, and a higher sampling rate to avoid the sharp filters required by a low sampling rate. The Creative Sound Blaster X-Fi Fatal1ty FPS PCI Sound Card is a perfect example of high-end, high-resolution audio equipment for your computer. It has state-of-the-art 24-bit data paths for a dynamic range of 109 dB, 13 dB better than the 96 dB of CD audio. Since dB is a logarithmic scale, 13 dB translates into a noise floor 20 times lower. The sampling rate has been cranked up to 192 Kilohertz, which allows for a very simple and non-distorting cutoff filter. For those with younger ears that can hear past 20 Kilohertz, this higher sampling rate expands the frequency response. Even for older people with more limited high frequency hearing, the improvement is evident in a smoother phase response down to lower frequencies and fewer digital artifacts (think of an artifact as the digital version of a speck of dust on a record). 10. Budget Sonic Excellence You don't have to break the bank to have a big step up in audio quality. Even an inexpensive internal sound card like the Creative Sound Blaster Live! 24-bit PCI Sound Card has specs way beyond Compact Disc capability. You get the 24-bit data path for the lower noise floor and more headroom for recording. The sampling rate goes up to 96 Kilohertz, which is still a step up from the CD class. 
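Since these specs lean on decibel arithmetic, here is the conversion behind the "20 times lower" claim: decibels are logarithmic, with every 10 dB representing a factor of 10 in power.

```python
def db_to_power_ratio(db):
    """Convert a decibel difference to a power ratio."""
    return 10 ** (db / 10)

# The X-Fi's 109 dB dynamic range vs. CD audio's 96 dB:
print(round(db_to_power_ratio(109 - 96)))   # 20 -- a 20x lower noise floor
```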
The 96 dB dynamic range of 16-bit CD quality is fine if the program material has been carefully recorded and processed to fit. If you are doing your own recording, the additional dynamic range of 24 bits really comes in handy by allowing more headroom. You don't want music peaks to clip, so backing down a bit on the volume allows more space at the top, and with the lower noise floor, the low volume is not a problem. Even this bargain basement sound card has the guts to give you this level of performance. Final Words Compact Disc digital audio broke through the barrier to high quality sound. Now that the restrictions of analog recording media are behind us, we expect more sonic depth from our music and sound tracks. With 24-bit dynamic range, we can have the loud peaks and the quiet passages without intruding noise or hiss. Boosting the sampling rate for both recording and playback opens up the distortion-free frequency range to the full spectrum of hearing. It doesn't matter if you are building a killer PC-based sound system for the home theater, or just outfitting
your home office machine to record your old LPs on the side, there is a sound card with advanced performance for you.
Tech Tip 68 - Step Up to Double Layer DVD Article by Roy Davis In Tech-Tip Number 7, Mr. J. Kohrs explained the alphabet soup of DVD formats. Double layer DVD writers and the blank discs for them were just hitting the market then, so he didn't have much to say about the latest and largest-capacity optical disc system. Since then, DL drives and media have popped up all over at decent prices, so it's time to dig a little deeper. That last sentence is a bit of a pun on the whole double layer thing, because it works by burying your data a little deeper into the disc. We'll discuss why double layer is so exciting, and when you can economize by using the less expensive single layer discs. 1. Refresh on How DVDs Store Data Most explanations of how optical discs work start with an allusion to LP records, with a track that spirals across the face of the disc and a pickup that follows the track to extract the data stored there. Unfortunately, the flat disc and the spiral are about the only things in common. Optical discs like CD-ROMs and DVDs are made up of a clear plastic disc with a layer of very thin metal buried just under the surface of the plastic. The track is actually molded into the plastic, a thin metal layer is laid over the plastic, and the whole thing is sealed up with a clear lacquer finish. 2. Not Grooves: A Trail of Bumps LP records are easy to visualize because they use a V-shaped groove that forms the track. The sharp point of the pickup fits down in the groove, and the groove wall pushes the pickup to keep it tracking the spiral. Optical discs are completely different, with a laser light focused onto the spiral track of bumps. An optical sensor picks up the reflections off the bumps, and electronic tracking circuits command tiny motors to move the pickup to keep it aligned with the track.
Notice I said track and not tracks? There is a single track that starts at the inside near the center hole and spirals out, just the opposite of the LP record. It's not concentric tracks like a hard drive or floppy disk. The disc could be any size up to the maximum of 120 millimeters, about 5 inches. There are smaller optical discs available, all the way down to business card-sized discs with only a few dozen Megabytes of storage. Speaking of tracks and dimensions, they pack almost 8 miles of data in that single track. The double layer DVD disc has about 15 miles of storage track. That means the track has to be wound pretty tight, with a pitch of only 0.74 micrometers (millionths of a meter) between turns. That takes some pretty precise tracking! 3. Ones and Zeros Become Lands and Bumps Along the track, there are flat reflective areas called lands. This is really just the non-bumped part of the disc surface. Then, there are the nonreflective bumps. A flat reflective area represents a binary 1, while a nonreflective bump is a binary 0. The DVD drive shines a laser at the surface of the DVD and can detect the reflective areas and the bumps by the amount of laser light they reflect. The optical pickup converts the reflections into 1s and 0s to extract digital data from the disc. This describes how commercially-pressed audio CDs, CD-ROMs and DVD movies work. They are read-only devices with the simplest construction and are the easiest to explain. A recordable disc, however, also needs to allow the drive to write data onto the disc. In order for a recordable DVD-R or DVD+R disc to work, there must be a way for a laser to create a non-reflective area on the disc. These discs have an extra layer of dye that can be changed by shining a strong laser beam on it. On a blank recordable disc, the entire surface of the disc is reflective. The laser can shine through the dye and reflect off the metal layer. 
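Stepping back to the track geometry for a moment, the "almost 8 miles" figure quoted earlier can be sanity-checked by treating the spiral as concentric rings at the 0.74 micrometer pitch filling the disc's data area. The inner and outer radii below are my assumptions based on the standard DVD program area, not figures from the article:

```python
import math

R_INNER_MM = 24.0     # data area starts about 24 mm from the center (assumed)
R_OUTER_MM = 58.0     # ...and ends about 58 mm out (assumed)
PITCH_MM = 0.74e-3    # 0.74 micrometers between adjacent track turns

# Spiral length ~= swept area / track pitch
area_mm2 = math.pi * (R_OUTER_MM**2 - R_INNER_MM**2)
length_miles = area_mm2 / PITCH_MM / 1_000_000 / 1.609   # mm -> km -> miles
print(f"~{length_miles:.1f} miles of track")   # ~7.4 miles
```

That lands right around the article's "almost 8 miles" for a single layer.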
When the drive writes data to the disc, the laser heats up the dye layer and changes its transparency, which is the equivalent of a non-reflective bump. 4. The Trick of Double Layer Now we know how a single layer DVD works, both the prerecorded type and the ones you can burn at home. Just how the heck do they put two layers of data on one side of the disc? It would be real easy to say magic at this point, but the real explanation is pretty simple.
Think about how, when you walk up to a window with a screen and look out, you see the scene outside and don't even see the screen. It's close to your face, so it's out of focus and you don't even notice it is there. If you back up a little and force your eyes to focus on the screen, it pops right out and you can see it, while the scene outside is all a blur. Double layer DVDs pull a similar trick. There is only one reflective layer, but there are two layers of dye where the actual data is stored. The lens in the pickup focuses the beam on the top layer to read the first bunch of data, and then the lens focuses the beam on the bottom layer and sees right through the top layer. Because the top layer is out of focus, the data stored there just disappears and the bottom layer is read instead. All that build-up and detailed explanation to find out it's a simple trick of optics that even your own eyeballs can do! 5. So What's the Benefit? When recordable DVD media first hit the market, it hadn't grown up yet, and capacity wasn't too much bigger than CD-R. As DVD-R and DVD+R came of age, the capacity of a single-sided disc settled on 4.7 Gigabytes. That was enough room for a two-hour, medium resolution compressed movie. It's also a handy size for normal backups of your hard drive, or all the digital photos from your vacation even if you shot them all in the high quality mode. But what if you want to record a truly high definition movie? It won't fit in 4.7 Gigabytes. Even a medium definition movie won't fit if it extends past two hours. How many movies come with a separate disc for the extra features? It's a pain to have to get out of the easy chair to change discs. The double layer DVD solves this by having 8.5 Gigabytes of storage without having to flip the disc. 6. What Do I Need? Naturally, older DVD drives don't have the mechanism to switch focus between the two levels of a double layer disc. 
The pickup has to be physically moved to change the focus point from top to bottom, so you need a drive with this capability built in. The LG 16x Double Layer DVDRW/DVD-RAM IDE Drive is typical and attractively priced. Computer drives that can read double layer discs usually also write double layer, and that's the case here. Be aware that double layer DVDs have to be written at 4X speed, as opposed to 16X for single layer discs. 7. Blank DVDs Are a Bargain CD-R media are really inexpensive these days, with recordable DVDs being a little more expensive. But are they? A single layer DVD-R or DVD+R can hold as much data as seven CD-Rs. That means that if a DVD is less than seven times more expensive, it is actually
cheaper than a CD-R for those large data storage tasks. They are also a lot more convenient than shuffling a stack of CD-Rs in and out of your drive. If you just want to test the waters without springing for a tall spindle of blank double layer DVDs, try the Verbatim Double Layer Solution Kit (DVD+R, DVD+R DL, DVD+RW), which gives you a sampling of three different blank optical disc types. If your storage needs are less than about 4 Gigabytes, then stick with the single layer discs. 8. HD DVD versus Blu-Ray While double layer DVD seems like a huge amount of storage, the requirements of super high definition video and huge hard drive backups push the optical drive manufacturers toward even larger capacity discs. HD DVD is a refined version of the DVD we use now. It uses the same trick of double layers to almost double the capacity, up to 30 Gigabytes per side; backing up a full image of a 160 Gigabyte hard drive takes a half dozen discs. Blu-Ray answers this with the promise of up to 200 Gigabyte discs eventually becoming available. They pack the data in even tighter than HD DVD and can stack up several layers to increase storage. Unfortunately, HD DVD and Blu-Ray will probably be available only as commercially produced DVDs for viewing movies in the near-term. Somewhere down the road, we'll start seeing recordable versions to mount in our computers' drive bays. Final Words While recording movies on single layer DVDs might be fine with the old standard TV, now that you have a widescreen flat panel television that is capable of HDTV, why suffer degradation of image quality by over-compressing the video? Get a double layer DVD drive for your computer and stock up on double layer DVD blank discs to capture all the detail. The same goes for your computer backups. Put your whole photo collection on one 8.5 Gigabyte double layer disc. Don't worry about running out of space on a single disc. 
Though the double layer discs may be more expensive, they hold twice as much and take up less storage space than a pair of single layer discs or a dozen or more CD-Rs.
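The cost arithmetic from section 7 is easy to verify. Capacities below are the nominal figures from the article; the per-disc prices are made-up placeholders just to illustrate the comparison:

```python
print(f"CD-Rs per single layer DVD: {4.7 / 0.7:.1f}")   # 6.7

# (capacity in GB, hypothetical price per disc)
media = {
    "CD-R (700 MB)":     (0.7, 0.25),
    "DVD+R (4.7 GB)":    (4.7, 1.00),
    "DVD+R DL (8.5 GB)": (8.5, 2.50),
}
for name, (gb, price) in media.items():
    print(f"{name}: ${price / gb:.2f} per GB")
```

With those placeholder prices, even the double layer disc beats the CD-R on cost per gigabyte, before counting the convenience of handling fewer discs.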
Being the dedicated geek that I am, I spend a lot of time with my face in the computer screen. Much of that time, I am driving a cursor around the screen using a pointing device of some kind. In this installment, I want to talk about the long-term physical effects of computer pointing, in other words, the ergonomics of riding a mouse. We will look closely at how to use a computer pointing device for maximum comfort and minimum wear-and-tear on your arm and shoulder. This is not something to ignore, unless you want to be crippled in your geek old age. Don't laugh, it really happens, as I will personally relate. 1. Keyboard Injury While our topic here is pointing devices, it helps to quickly review how keyboards can hurt us. Most people have heard of a type of injury caused by the long-term effects of typing on a keyboard called RSI, or Repetitive Stress Injury. The damage is usually done in the carpal tunnel in your wrist, where the nerve passes through the wrist joint. The wrist, when bent back to align with the keyboard, can squeeze the nerve that runs through the joint. The nerve sheath becomes inflamed and swells, which further crushes the nerve. Once started, it is very difficult to stop this cycle and repair the damage completely. Often, you have to totally give up typing for several weeks, and then wear a wrist brace to hold your wrist straight for many months until the injury is healed.
The protection from keyboard injury is pretty straightforward. Use an Ergonomic Keyboard, where the keys are angled so your wrists stay straight as you lay your fingers on the home row of keys. I gave up using straight keyboards many years ago, and now only use straight keys on my laptop when I am away from a desk and have no other choice. My desk is arranged so that my arms are straight and my wrists are not bent back while typing. The proper relationship between keyboard height and seat height can help here. One solution is a keyboard tray that fits under the desk and lowers the keyboard. Another alternative I like is to put your keyboard in your lap so your arms are completely relaxed. A Wireless Keyboard gets rid of the tangle of wires. 2. Ouch! My Elbow Hurts So, I'm sitting here at my custom computer desk. I have the ergonomic keyboard, and my desk is set up just right so my arms are completely relaxed. I should have no problem wailing away on my computer day and night, right? Then why does my right elbow have shooting pains, and why is my shoulder killing me? Why can't I straighten out my arm? I thought I did all the right ergonomic things, so why is working on my computer so painful? This is the real story of my experience, not some made-up example to illustrate a point. I had to learn this the hard way! 3. Mousing Around At the time, I was doing a lot of engineering work, and that involved a lot of drawings done on my computer. Drawings tend to need a lot of mouse work to get the lines in just the right spot. It doesn't make a difference if you are designing a computer chip, updating the company organization chart, or laying out the plays for your Pop Warner football team; drawing on the computer comes into everyone's life. When the drawing on my computer was zoomed out, it filled the screen and I could see the whole drawing, but the lines had to be placed with extreme care to get them to line up. 
With a complex drawing, it was almost impossible to get exact alignment in the full-page view mode. It helped to zoom in to expand the parts where I was working. The line placement was easier not only because I could see it better blown up like that, but also because the mouse movement was not as critical to get the line where it belonged. Of course, I ended up going back and forth between the full-page view and the expanded view to
figure out where I was and where to place the lines. That made for a lot of extra mousing around. 4. Snap-to-Grid One trick I learned was that I could use the Snap-to-Grid feature of my CAD program. Instead of tensing up while trying to hold the mouse just right to get the lines exactly where they belonged, I could relax a little and just get the end of the line close. The end of the line would then snap to the grid position. When the Snap-to-Grid feature is turned on, a series of tiny dots in rows and columns is overlaid on the screen. This is the grid. When you place the end of a line, it will snap from the place you put it to the closest grid point. This makes laying out your drawing much easier, faster and neater. Use the widest grid spacing that allows placement of the lines where you need them. Wide grid spacing means you can be pretty sloppy with the placement and still get nice, even alignment. You can accomplish your work without having to tighten your grip on the mouse. You can relax and use more fluid movements that take the stress off your arm. Many computer graphics applications have a Snap-to-Grid feature or something close to it (no pun intended). Sometimes, a program will have a Snap-to-Object feature, which achieves a similar result. You might be placing boxes that represent people in an organization. The connection lines will automatically snap to the center of a side, the top, or the bottom of the box, making it real easy to get the lines neatly spaced. 5. Say Goodbye to the Mouse Even after using all my tricks, my elbow kept getting more and more painful. I couldn't sleep at night, and I ended up working left-handed on the computer. That wasn't a solution, because I could see that soon I would have two crippled elbows. I noticed that after using the mouse extensively, my elbow hurt more than ever. I took a week off from the computer and my arm didn't hurt as much. When I went back to work, the pain came right back the first day. 
Finally, I realized that as I held the mouse, I twisted my arm into an unnatural position. To make accurate mouse movements, I had to tense up my arm and shoulder muscles. This combination was the source of the pain. Unfortunately, to use a mouse I had to make these movements that were so painful. It became clear to me that I had to find an alternative to the mouse. 6. Let's Try Something Different Being the gadget freak that I am, I had accumulated an assortment of computer pointing devices. One of these gadgets was a cheap trackball. I had decided that the mouse was a more natural-feeling pointing device, so the trackball was relegated to my junk drawer. I bought a new
Logitech Trackball Marble Mouse that has a different design than the traditional trackball. Instead of having the ball imbedded in the device with only a small part of the ball exposed, the Marble Mouse has the majority of the ball sticking out where you can touch it. 7. The Marble Mouse This exposed ball concept is great. You can manipulate the ball with just the tips of your fingers while keeping your forearm in a natural position. You do not have to twist your arm as you have to do for a mouse or even an old style trackball. I use my forefinger to steer the cursor with the ball with occasional help from my middle finger if Im scooting across the screen quickly. My thumb presses the left mouse button while my middle or ring finger can operate the right button. You can switch around if you feel comfortable in other configurations. It takes a little getting used to, but I find the Marble Mouse much more responsive than a standard mouse. The Marble Mouse is only one example of the modern trackball. There are many others by Logitech and other manufacturers. Of course, the trackball has not been left out of the wireless revolution. There is a Cordless Optical TracMan Trackball Wireless Mouse. The base station plugs into your USB or PS/2 port. The wireless trackball runs on a battery and has no wires at all. You can put it anywhere on your desk (or your lap) that is comfortable without getting tangled up in cords. 8. Works for Southpaws Too If you are lucky enough to be left-handed, you probably despise all these computer pointing devices that are shaped for right hands. The Logitech Trackball Marble Mouse is exactly symmetrical so it is neither right nor left handed. The ball is exposed on both sides and the buttons are exactly the same. If you go into the Control Panel and to the mouse applet, you can switch the left and right mouse buttons. By doing so, left-handed people can have exactly the same comfortable pointing experience as right-handed people. 9. 
Pen Tablets An input device that many people find more comfortable to use than standard or trackball mice is the pen tablet like those made by Wacom. Working with a pen tablet is a smooth and natural motion because every point on the tablet has a matching point on the screen. When you move your pen over the tablet, the cursor moves exactly the same way on the screen. Where you touch your pen tip to the tablet is where you click. This motion can help alleviate wrist and hand pain and avoid or minimize the effects of repetitive stress.
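The "every point on the tablet has a matching point on the screen" behavior is called absolute positioning, and it is simple to express: each tablet coordinate scales directly to a screen coordinate. A quick sketch, with made-up tablet and screen dimensions for illustration:

```python
def tablet_to_screen(tx, ty, tablet_w, tablet_h, screen_w, screen_h):
    """Map a tablet point to its matching screen point (absolute positioning).

    Unlike a mouse, which moves the cursor relative to where it already was,
    every tablet position corresponds to exactly one screen position.
    """
    return (tx * screen_w // tablet_w, ty * screen_h // tablet_h)

# A pen touch at the center of the tablet lands at the center of the screen.
print(tablet_to_screen(5000, 3750, 10000, 7500, 1024, 768))
```

Because the mapping is fixed, lifting the pen and touching down elsewhere jumps the cursor there directly, with no dragging motion required, which is part of what makes the tablet gentler on the wrist.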
Final Words
This issue of computer pointing device ergonomics cannot be taken lightly. A senior member of the staff at work abused himself so badly that even with a brace on his hand, he could not use the mouse or the keyboard. We had to hire a typist for him while he suffered through a long recovery. I consulted with the typist, who has a strong IT background, and discovered that he had come to the same conclusion that I have: get rid of the mouse and invest in a modern trackball. Following his advice, I just ordered three more Marble Mice: one for my mom, and one each for my wife at work and at home. They are a lot cheaper than wrist braces, physical therapy, and stand-in typists.
Tech Tip 70 - Print Your Snaps With PictBridge
Article by Roy Davis

We all like taking lots of photos of our family and friends everywhere we go. Some of us still have to wait for the fat envelope full of prints to arrive from the photo-processing house to see how our snapshots came out. But most of us Geeks have long since switched to digital still cameras and can instantly see the results on the tiny LCD screen on the camera. What is missing is a photograph on a piece of paper. It's somehow more satisfying to shuffle through a stack of prints. Now, you can take advantage of a new interface for digital cameras that allows you to take photo files directly from your camera to your color printer. It's called PictBridge, and this new specification breaks down the barrier between cameras and printers of different brands. Eventually, PictBridge technology will be available in electronics other than digital cameras that store, view, or capture digital still pictures, including camera phones, Personal Digital Assistants (PDAs), and digital video cameras. You will be able to print from any of these PictBridge-enabled electronics to any PictBridge-enabled printer of any make or brand. For now, we'll just discuss this technology as it relates to digital cameras.

1. USB Photo File Transfer
Digital cameras used to use serial ports or custom docking stations to transfer the files of data that contain your photos to a computer. The software drivers were proprietary and very fussy. I remember spending hours trying to make my camera talk to an IBM laptop, and even then only getting it to work by turning off the IrDA infrared port. Universal Serial Bus, or USB, came to the rescue and simplified the basic interface between cameras and computers. You no longer had to load a special software driver on the computer for an ordinary file transfer. Then, printers started using USB instead of the specialized and very clunky parallel printer port. Ah ha! The stage is set
for direct camera-to-printer photo transfer, at least at the physical electrical level. Both the camera and the printer have to talk USB.

2. The Hard Way
The traditional way to get your digital photos from the camera to the paper is to fiddle with the cable to upload the photo files to your computer. You need to have photo-editing software loaded on your computer and then spend time learning how to run the software. I don't know why, but photo-editing applications are about the most difficult software to learn. Even the icons are baffling to look at. After struggling to upload and edit the photos, you then have to send the photos to your printer. Most photo editors can only print one picture at a time. Getting multiple images on one sheet of paper is a chore, especially if you want them evenly spaced with nice white borders around them. How many people do you know who fill up the huge data storage card for their digital camera, and then park the camera until they can get around to dealing with the photos? It's just like all those rolls of undeveloped film in the drawer - it takes too long.

3. Proprietary Camera-Printer Links
Camera and photo printer manufacturers saw the problem. People want instant paper prints in their hand. Polaroid instant cameras solved the problem with some chemical magic, but the film was expensive, the process was slow, and the quality of the photos was horrible. In this digital age, there has to be a better way. Companies that had both a camera and a printer division brought out models of both devices that could directly interface. You could take the USB cable, connect the camera to the printer, and go straight from the camera memory to a paper print. The trouble was that you needed the special model of both the camera and the printer that had the proprietary link. What if you already had a perfectly nice printer and saw a camera from another manufacturer that caught your eye?
You were out of luck for direct camera-to-printer transfers.

4. PictBridge Breaks Down the Barrier
The digital photography industry saw the big-picture problem and set up a committee to resolve it. The result is known as PictBridge, a way to bridge your pictures from your camera to your printer. It was also a way to shift the holdouts from disposable cameras and drugstore prints into the do-it-yourself digital camp.
PictBridge is an industry standard that allows a camera from one manufacturer to talk to a printer from some other company. It also means you are not tied to your custom setup. If you are visiting relatives and snap some shots of the family you want to share, you can connect to the PictBridge printer they happen to have and pop out a few prints to pass around.
5. Print One, Print Them All
The simplest way to take advantage of PictBridge is to use the view screen on your camera to select an image. A PictBridge-equipped camera, when connected to a PictBridge-enabled printer, will display an option to print that photo. You can work your way through the photos in your memory (the one in the camera, not in your head) to select each picture you want printed. If you are an especially talented photographer and every shot is a winner, or you don't care about wasting a lot of expensive photo paper and ink, you can choose to print all the photos in memory. Watch out for this one. Since taking digital photos is basically free, and high-capacity memory cards can hold hundreds of images, you can burn through a hundred bucks' worth of paper and ink in a single sitting. It's better to be a bit more selective! One way to do this is to make an index print, what chemical photographers would call a contact sheet. It's just like a thumbnail preview of all your photos, only it's printed on a single sheet instead of shown on your computer screen. Since we cut the computer out of the deal here, the index print becomes the point of reference. Review the shots, pick the ones you really want, and print away.

6. Order It Your Way with DPOF
All that selecting and printing from the camera view screen is fine up to a point. It's boring to sit around with your camera while your printer grinds out each print. The photo industry stepped up to the task again with a new specification called DPOF, or Digital Print Order Format. The concept of using the camera view screen and selecting the images to print, the number of prints desired, and the size to be printed is pretty much the same. What's new is that you can do all this previewing, sorting, adjusting, and sizing with just the camera in your lap in the back seat of the car on the way home. The printing instructions will be stored until you hook up to the printer.
The printer can then do its thing while you go take a shower, and you'll have a stack of finished prints waiting when you get back.
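Under the hood, a DPOF print order is just a small plain-text file stored on the memory card (conventionally MISC/AUTPRINT.MRK) listing which images to print and how many copies of each. The sketch below builds a simplified order of that general shape; the exact field names and layout here are illustrative, so consult the DPOF specification before relying on them:

```python
def dpof_order(prints):
    """Build a simplified DPOF-style print order.

    prints is a list of (image_path, quantity) pairs. Real DPOF files
    carry more header fields; this sketch only shows the general idea.
    """
    lines = ["[HDR]", "GEN REV = 01.00"]
    for job_id, (path, qty) in enumerate(prints, start=1):
        lines += [
            "[JOB]",
            "PRT PID = %03d" % job_id,   # print job ID
            "PRT TYP = STD",             # standard-size print
            "PRT QTY = %03d" % qty,      # number of copies
            '<IMG SRC = "%s">' % path,   # image to print
        ]
    return "\n".join(lines)

# Two copies of one shot and one of another, chosen from the back seat:
print(dpof_order([("../DCIM/100KODAK/DSC00001.JPG", 2),
                  ("../DCIM/100KODAK/DSC00007.JPG", 1)]))
```

Because the order is just data on the card, any DPOF-aware device, whether your own PictBridge printer or the kiosk at a photo service, can carry out the same instructions.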
DPOF isn't even limited to digital camera enthusiasts who own a PictBridge color printer. You can take the memory card from your camera to a photo service. There are even do-it-yourself machines coming on the market where you can stick in your memory card with the photos and DPOF instructions, insert a credit card to pay for it, and walk away a few minutes later with your prized snapshots on paper.

7. Real Cameras on the Market
This PictBridge thing is not just puff-words from some photo industry organization. There really is equipment on the market that supports PictBridge and DPOF. An example is the Kodak EasyShare V550 5 Megapixel digital camera. PictBridge and DPOF are right there in the long list of specifications. It's a pretty typical point-and-shoot type of camera that anyone would find easy to use. The minimal number of controls keeps your eye on the scene instead of on fiddling with buttons. Five megapixels means the resolution of your photos can run all the way up to 2,576 x 1,932 pixels. That's plenty of image detail to fill the largest paper size your printer can handle. No worries about fuzzy prints unless you shake the camera in dim light.

8. Camera Features for PictBridge
There are a number of special features that make a camera outfitted with PictBridge even better. The V550 includes all of these to automate the picture-taking process even more than you might realize. They save a lot of steps that you would normally need a computer-based photo editor to achieve.

9. Camera Orientation Does Matter
Camera Orientation Detection sounds unimportant or minor, but if you've ever had to review a few dozen photos you took in various positions other than upright, you know it can take time and effort to reorient the photos for comfortable viewing. Many times, I've had to go back and manually rotate the images so people didn't look like they were lying down. This is especially important when printing more than one image on a single sheet of paper.
Cameras like the V550 that come with Camera Orientation Detection can even rotate images in the camera if you want, so all your photos are right-side up before going to the printer.

10. Photo Touch-Up without the Computer
Another camera feature that the V550 sports is red-eye reduction. Even with a computer-based photo editor, it takes some skill to do this touch-up. It's a common photography problem
most of us have had to deal with at one time or another. Red-eye shows up when the light from the flash on the camera illuminates the inside of the eyeball. It makes the subject look like some sort of demon and is very disconcerting in a photo. The trick is to paint over the redness with a dark color that matches what the pupil normally looks like. If not done correctly, red-eye removal can make the eyes look very strange, an effect most people don't want in their shots. The V550 can automatically detect and correct red-eye for you. That's pretty tricky business, but it can save images and get them ready for printing without a trip through your computer.

11. Cropping Isn't Just for Farmers
In the heat of the picture-taking moment, it's hard to get the edges of the image right where you want them. Often, extraneous and distracting elements are included near the edge of the photo. Most often, you didn't even know the bad stuff was there until you see the image on the view screen, and by then it's too late. Or is it? The Kodak V550 has the ability to crop images. That means it can trim off the unwanted parts of the photo right there on the view screen. Then, the files will be all ready to go to the printer without further work. Slice off that hand sticking into the edge of the frame. Even pick one headshot out of a group picture if you want.

Final Words
Digital photography is no longer the domain of only those who are serious enough to spend hours photo-shopping their images and waiting patiently by their printer to see how it all comes out. Even if you just want a stack of 4 x 5s to pass around at the dinner table with a minimum of fuss, you can skip the pocket full of film rolls and the 1-hour lab and make your prints right at home. For now, look for a digital camera and photo printer with PictBridge capability, with a view towards watching for this new technology in many other electronics that take, hold, or store digital photos.
DPOF is the icing on the cake for taking total control of your digital photo printing. Even if you use a commercial photo printer, PictBridge and DPOF will let you use odd moments to do your photo selecting and get your prints done quickly instead of leaving them languishing in your camera.
Tech Tip 71 - FRS Radios for Geeks
Article by Roy Davis

With Instant Messaging (IM), Voice over IP (VoIP), Short Message Service (SMS), and dozens of other computer-based communications acronyms stuffed in your Geek memory, how about this one: FRS? I'll give you a hint - it doesn't run on your computer, but it's a way to stay hooked up with your buddies that you shouldn't pass up. Sometimes, you need to stay in touch with your family or a bunch of friends instantly, but don't have your computer screen in front of you to IM. Cell phones are fine for talking to one person, but how do you hail a group of people right now? You might have overlooked FRS, or just don't really know what it's about. Let's take a look.

1. CB It's Not!
Your idea of two-way radios might be CB, or Citizens Band, right out of Smokey and the Bandit. Sure, the guys driving 18-wheelers still use CB radios, but we are Geeks, and we're not interested in the latest location of radar speed traps on the Interstate. Truckers use radios that are technically Class D Citizens Band. They operate at 27 Megahertz (MHz), which is down in the shortwave radio band that is full of skip. You might occasionally be able to talk to someone hundreds or even thousands of miles away, but mostly you just have too many people trying to talk to each other. Class D CB uses Amplitude Modulation (AM), which was popular and cheap fifty years ago when Class D was designed. Unfortunately, AM is very susceptible to interference, with all sorts of squeals and howls.
2. FRS Has Roots in Class A
Actually, there was another form of Citizens Band radio that predates the "10-4, good buddy" type. It operated in the UHF (Ultra High Frequency) band, near where commercial two-way radios for police cars and delivery trucks are now. This was Class A Citizens Band, but almost no one had heard of it because the equipment was too expensive, the rules too restrictive, and it required a complicated license from the Federal Communications Commission, or FCC. In the eighties, this part of the radio spectrum was converted to the General Mobile Radio Service, or GMRS. The FCC license was easier to obtain, the rules relaxed, and the equipment less expensive and easier to operate. GMRS radios use Frequency Modulation (FM), which rejects interference better than AM CB radios. Even if another station is on the same channel with a weak signal, you can usually talk right over them to your close-by buddies, and the other party probably won't even know it.

3. FRS Brought to You by Tandy
GMRS was mildly successful, which we'll get back to later, but electronics retailers wanted to sell more radios. The biggest retailer of all pushed a new concept called Family Radio Service, or FRS. That retailer is Tandy Corp., but we all know those folks as Radio Shack. In the mid-nineties, Tandy went to the FCC and convinced them that this new FRS could share the radio channels with GMRS. Instead of the 5 Watt mobile radios common in GMRS, the FRS radios were usually handheld units limited to 500 milliWatts, which sounds better than half a Watt.

4. Putting FRS to Work
What can you do with an FRS radio? Suppose your gaggle of friends descends on the local swap meet looking for an 8-inch floppy drive to finish off that early microcomputer for your computer museum. You split up to cover all the aisles as quickly as possible. How are you going to rally the troops when you spot the object of your desire?
If everyone has an FRS radio plugged into an earphone, the moment you press the Talk button, they will all hear you. No dialing numbers or setting up chat groups. Just press and talk.
5. Privacy Codes
I mentioned the interference and the crowds of people talking on CB radio. It's something you certainly don't want to listen to for very long. Even on FRS, there are others sharing the channels you are using. Many FRS radios now have privacy codes available. When you set a privacy code on each radio in your group, all other transmissions are ignored. Only when a signal carrying your privacy code is present will you hear the call.
Privacy codes are not new to two-way radio. Most privacy codes are implemented with CTCSS, or Continuous Tone Coded Sub-audible Squelch. That's a mouthful, but the idea is pretty simple: by adding a low-frequency tone to the signal on the transmitter side, your signal is made unique. The receiver will filter out all other signals and keep the speaker quiet. When the receiver hears your tone, the speaker is activated and the recipient hears only your call.

6. Look Ma, No Hands!
FRS has some real practical uses too. Suppose you have your head in the wiring closet trying to troubleshoot a network problem, and you need to talk to your helper who is running tests on the equipment in the other room. You can try yelling down the hall, but that bothers the other workers, and it's hard to yell when you have a punch-down tool in your mouth because your hands are full of wires. Some FRS radios come with VOX, or Voice Operated Switch. When the radio microphone picks up your voice, it automatically switches on the transmitter. Thus, you and your helper can talk back and forth almost as if you were in the same room. VOX is real handy this way, but it isn't perfect. There is a tiny delay before the VOX circuit picks up, so it will cut off the first part of your first word. If you get in the habit of saying something like, "Ah, George, did that fix it?" you will not have trouble with first-syllable clipping. I know my FRS radios are indispensable while adjusting the UHF antennas up in the attic for the best reception of HDTV signals. My wife watches the signal quality bar on the wide-screen LCD TV downstairs while I twist the antenna back and forth while hanging from the rafters. We live in a neighborhood with lots of ghost signals on analog channels, so the clean digital signals make a huge difference.
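For the curious, the CTCSS squelch described in section 5 can be sketched in a few lines: the transmitter mixes a low-level sub-audible tone under the voice, and the receiver measures the energy at that exact frequency (here with the Goertzel algorithm) and only opens the speaker when its own tone is present. The 110.9 Hz tone, the mixing level, and the threshold are illustrative values I chose, not from any particular radio:

```python
import math

def goertzel_power(samples, sample_rate, freq):
    """Energy of one frequency in a block of audio (Goertzel algorithm)."""
    n = len(samples)
    k = int(0.5 + n * freq / sample_rate)
    coeff = 2.0 * math.cos(2.0 * math.pi * k / n)
    s1 = s2 = 0.0
    for x in samples:
        s1, s2 = x + coeff * s1 - s2, s1
    return s1 * s1 + s2 * s2 - coeff * s1 * s2

RATE, TONE = 8000, 110.9  # sample rate (Hz) and a CTCSS-style tone

def transmit(voice, with_tone):
    """Mix a low-level sub-audible tone under the voice audio."""
    return [v + (0.15 * math.sin(2 * math.pi * TONE * i / RATE)
                 if with_tone else 0.0)
            for i, v in enumerate(voice)]

def squelch_open(samples, threshold=1e5):
    """Open the speaker only when our tone's energy exceeds the threshold."""
    return goertzel_power(samples, RATE, TONE) > threshold

# A 1 kHz sine stands in for voice; only the tagged signal opens squelch.
voice = [0.8 * math.sin(2 * math.pi * 1000 * i / RATE) for i in range(RATE)]
print(squelch_open(transmit(voice, True)), squelch_open(transmit(voice, False)))
```

Note that this is privacy in name only: the tone just keeps your speaker quiet during other people's calls, while anyone on the channel without a tone filter still hears everything you say.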
7. Economical, Too
FRS radios are small, portable, and can be really inexpensive, too. They're cheap enough to keep a pair in the toolbox or in the glovebox of the SUV. Take a look at the Xact X-Link Digital Watch & 22 Channel FRS/GMRS 2-Way Radio right here at Geeks.com. For less than a round of greaseburgers at your favorite fast food joint, you can sport one of these gadgets. You can clip it on your belt or use the strap to hang it around your neck. Having it closer to your mouth and ears makes the VOX operation work better.

8. Ranging Out
The specification most FRS manufacturers quote is a 2-mile range. FRS radios use UHF radio frequencies that depend on line-of-sight for maximum range. The signals will penetrate buildings and trees when you are close by, but especially after a rainstorm when everything is still wet, you are talking a few hundred yards. If you want range, sit on a hilltop and your distance will pick up significantly. If both parties get up on hills or even mountains, you are going to range out to dozens of miles. That's the nature of UHF radio. FRS is intended for close-in communications, so being within a block or two will usually work pretty well.

9. GMRS if You Get Serious
I mentioned that to use the GMRS channels, you need an FCC license. The license is somewhat expensive, at about eighty bucks for five years (a little over a buck a month), but one license is good for all of your immediate family members. The GMRS channels are generally less busy than the FRS-only channels, so you will get less interference and better range. For more details on a GMRS license, you can go right to the FCC Web site and get the straight scoop. The FCC tries to do all its licensing via the Internet through its Universal Licensing System. It's a little confusing, but read all the instructions and soon you can separate yourself from the FRS-only users.

10. Repeaters for Really Getting Out
I explained that if you get up to a high spot, your range extends considerably. If you upgrade to GMRS, you can avail yourself of a feature called a repeater. Years ago, radio amateurs started putting radios on tall towers and mountain peaks and set them up to listen on one channel and repeat the signal on another channel. GMRS copied this scheme. That's why GMRS channels come in pairs split between 462 and 467 Megahertz. With your simple low-powered radio, you can send signals that are repeated from the mountaintop to a huge coverage area. There are commercial GMRS services that will sell you repeater service, but you don't have to go that way. There are public service organizations that maintain repeaters you can use for minimal club dues. One such group is REACT, or Radio Emergency Associated Communications Teams. They are known for quick reporting of traffic accidents and for supporting other emergency services, such as the Red Cross http://www.redcross.org in large-scale disasters.
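The line-of-sight ranges discussed in section 8 follow from simple geometry: each antenna can "see" roughly 3.57 times the square root of its height in meters, in kilometers, and the maximum range is the sum of the two horizons. A back-of-the-envelope sketch; this is the idealized horizon formula, ignoring transmit power, terrain, and foliage, which is why real-world FRS range is far shorter:

```python
import math

def horizon_km(height_m):
    """Approximate distance to the horizon for one antenna (height in meters)."""
    return 3.57 * math.sqrt(height_m)

def max_range_km(h1_m, h2_m):
    """Line-of-sight limit between two stations: the sum of both horizons."""
    return horizon_km(h1_m) + horizon_km(h2_m)

# Two people holding radios at about 1.5 m: under 9 km even in theory.
print(round(max_range_km(1.5, 1.5), 1))
# Both parties on 100 m hills: "dozens of miles," as noted above.
print(round(max_range_km(100, 100), 1))
```

This is also why a repeater on a mountaintop works so well: raise one end of the link a few hundred meters and its horizon alone covers a huge area.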
Final Words
Pick up a pair of FRS/GMRS radios while the price is right. They might just be a handy gadget that saves some steps while shopping, visiting a theme park, or fixing something around the house or the workplace. Or, they might lead you to a whole new hobby with involvement in local emergency organizations. I hope understanding the features and uses of the FRS and GMRS services will help you choose what is best for you without getting confused by the legal gibberish of the FCC or the claims of the equipment manufacturers. It really is a lot of fun for the whole family to be able to communicate even in a crowded shopping mall. It gives everyone more freedom and peace of mind, and who couldn't use more of that?
Tech Tip 72 - The Facts About MySpace: What You Need to Know Before Your Kids Sign On
By Kimmy Powell

Even if your children don't have an account, they've certainly heard enough about it through word of mouth. The latest hangout for tweens and teens isn't the mall downtown or any particular club, but rather the chaotic world of cyberspace, at a website called MySpace.com. As the hottest social networking site on the web, MySpace has accumulated an estimated 54 million users in just three years of existence, with many users falling within the teen and twenty-something crowds and approximately 19% under the age of 17. MySpace is a wonderful place to meet others of like mind and interest, but with this sharing of interests come other dangers that are heightened by the unregulated nature of the Internet and the ready availability of personal information online.
Why MySpace?
Teens are constantly looking for places to hang out, away from parents and school. MySpace gives them just that: a place to socialize with others of similar interests and tastes. The biggest attraction of MySpace is its ease of use and the number of friends one can meet online. Setting up an account on MySpace is free and easy. The only requirement is that users be at least 14 years of age. The information needed to start an account includes:

- Valid e-mail address
- First and last name
- Password for the account
- Country
- Zip code
- Date of birth
- Whether or not to make the date of birth public
- Agreement to the Terms of Service and Privacy Policy

Once users agree and provide the requested information, they are invited to create personalized profiles. Profiles can include pictures, videos, or music. Members can link their profiles with those of other friends on MySpace. They can create shareable blogs (journals). Friends or strangers can post comments to these profiles, with users doing likewise on other profiles. Members can form buddy lists with groupings of friends and interesting people.
Kids who otherwise have trouble socializing with their peers on a school campus will like the appeal of MySpace, where there is no face-to-face interaction required, no popularity contests, no need to hide behind a mask. You are who you are, or who you think you are.

The Dangers of MySpace
While MySpace is great for making new friends and promoting oneself, there are several drawbacks to allowing your children to roam freely online within the MySpace framework. First and foremost, MySpace profiles are public. Anything posted on a public profile can be read by other members, and anybody in the outside world can get to a MySpace profile. Children often disclose too much personal information (e.g., names and addresses, school names, classmates, teachers, birthdates, favorite hobbies) on profiles, which attracts child predators lurking on the site. These predators seize upon details left in blogs, comments, and personal profiles to take advantage of these kids when parents aren't home, or when kids are at school. Secondly, teens love to gossip. The same problems that torment kids at school are magnified tenfold on MySpace. Gossip, malicious rumors, bullying, and racial slurs are posted on a public forum to an audience of millions. This can lead to serious problems in the future: a college may deny admission, or an employer may look elsewhere in recruitment. Something said now can hurt later on. Thirdly, people aren't who they say they are. A valid e-mail address is the only requirement for membership on MySpace, and any other identifying information can be faked. There are no controls in place within the MySpace system to actively check the validity of current members. Someone is caught only when MySpace explicitly catches them violating its policies. Thus, child predators can masquerade as teens, gain their trust, and use it to their advantage. Likewise, teens can lie about their ages and get access to materials otherwise denied to them. The lack of parental controls and the relatively easy access to inappropriate materials have prompted some parents and schools to remove access to the site entirely from home and school computers.

Parental Involvement
Increasing surveillance or outright banning your kids from using MySpace may seem like a safe bet, but it can breed rebellion and worse. Kids with existing accounts can hide profiles so that parents can't access them. If your child wants to access MySpace, your child will find a way. So, how do you keep your kids from becoming unwitting victims in the murky waters of MySpace? CyberTipline, an organization that works to prevent the exploitation of children, recommends establishing lines of communication and trust with your teen and educating yourself about the world of MySpace. In fact, awareness is the key to prevention.

Communication
Make sure kids are aware of issues of online safety. Discuss with them what they can and cannot do online. Provide an open environment where your children can share and report what they encounter online. If you react negatively and take away their Internet privileges at the slightest infraction, you're not creating a place of trust. Kids will be afraid to come to you with legitimate concerns if they feel they cannot trust you.
Be reasonable and try to understand the issues your children are facing. Remember, parents were once children too.
Tell your children to avoid putting personal information onto profiles or online blogs. Let them know that leaving too much personal information can come back to haunt them.
Education
Find out everything you can about MySpace. Educate yourself on the features and potential hazards of having an account. Monitor what your kids do online. Search on Google by e-mail address, name, nicknames, or school names and see what you find. Turn the online experience into a family adventure. Ask your children about the latest happenings online. Have them show you the hotspots on the net and what topics they're currently interested in. Don't believe the person behind the profile. Make sure your child understands that anybody can create an account on MySpace and lie about who they really are.
Communication with your children is the best way to make them aware of online dangers. Most kids avoid doing things online that draw unwanted attention. Instead of banning children from the site when they do something wrong, sit down and talk about common sense. Keep the channels of communication open on both sides, and you'll both be happier for it.
Tech Tip 73 - Installing a DVD/CDRW Drive
By Shane McGlaun

Installing a CDRW or DVD drive in your PC is a relatively painless procedure. However, if you have never opened your computer before, it can be a little intimidating. The first time you see all the wires and cabling inside your computer, you may think you are in over your head. DON'T WORRY: YOU CAN DO IT! Installing a CDRW or DVD drive inside your computer does not have to be a scary proposition, nor do you have to pay the high fees most electronics stores charge to install the drive for you. With this easy-to-follow guide, a few minutes of your time, and the required components, you can install your own CDRW or DVD drive and be burning your own movies and music before you know it.

Step 1
Before we get started, you need to round up a few items you will need for the installation:

- A non-magnetic screwdriver (if you have a screwdriver with a reversible Phillips/flat-head bit, all the better; if not, you may need two screwdrivers, depending on your computer case)
- The CDRW or DVD drive itself
- A vacant 5.25-inch drive bay (you can also exchange the current CDRW or DVD drive in your machine for the new one if no additional drive bay is available)
Step 2 Power off your computer and unplug it from the wall! You should never work inside your computer with the power turned on or the power cable connected to any socket because you could seriously damage your computer or yourself! Step 3 If you are installing a DVD or CDRW drive into an empty 5.25 inch drive bay, you will need to remove the front cover (often called a bezel) from the drive bay to gain access. The method used to remove the drive bay cover will vary
depending on the brand of your case. Some are removed by pressing tabs on the front of the computer case, and some have screws that need to be removed to get the cover off the case. Refer to your case's User's Manual for detailed instructions for this step. You will also want to remove both side panels from your computer at this time.

Step 4 The next step in the installation of your new drive is to prepare the drive itself by setting the drive jumper (a jumper is a small, plastic-encased metal bridge that closes an electrical circuit). On the rear of your drive, near where the IDE cable connects, you will see a plastic cap placed over two pins that controls how your computer sees your new drive. If you already have another CDRW or DVD drive in your system, the new drive should be jumpered, or set, as the slave. The existing drive is likely set to be the master drive already. If the existing drive is not set to master, it will be set to Cable Select, and setting the new drive as slave will still be the best option. Typically, there is a key to the jumper settings embossed on the back of the drive. Improperly setting the jumpers could cause problems with your computer recognizing your drive properly. IMPORTANT: You can only have one master drive on each IDE channel.

Step 5 Once your jumper is set, you are ready to install the CDRW or DVD drive into your computer case. This is a simple task! Slide the new drive into the vacant 5.25 inch bay and get ready to secure it in the next step.
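The master/slave rule in Step 4 can be summed up in a small decision function. This is just an illustrative sketch of the rule described above; the function name and input values are hypothetical, not part of any real tool:

```python
def recommend_jumper(existing_drive_setting):
    """Recommend a jumper setting for a new optical drive on an IDE channel.

    existing_drive_setting: "master" or "cable select" if another drive
    already occupies the channel, or None if the channel is empty.
    """
    if existing_drive_setting is None:
        # Alone on the channel, the new drive should be jumpered as master.
        return "master"
    # If another drive is present (whether jumpered as master or set to
    # Cable Select), setting the new drive as slave is the safest option.
    # Remember: only one master drive per IDE channel.
    return "slave"
```

In other words, the only time the new drive gets the master jumper is when it is the only drive on its IDE cable.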
Step 6 Now that you have placed the drive into your case, you are ready to secure it with screws or some other retention mechanism. Most drives are secured to the case using four screws which hold the drive in place. If your drive did not include screws for securing it, you should be able to get appropriate screws at a local computer or hardware store. Be sure not to overtighten the screws. They should be tight enough to hold the drive in place, but not so tight as to strip the screws or their sockets.
Some computer cases are tool-less, meaning screws are not used to secure the drive to the case. Tool-less cases use other methods of securing drives, such as retention brackets and snap fittings. If your case is a tool-less design, refer to its User's Manual for instructions on securing a drive.

Step 7 Now that the drive is installed and secured to the case, you need to identify and locate the cables required to connect the drive to your motherboard and power it. All internal CDRW and DVD drives need the same two cables to connect them to your system: an IDE cable and a 4-pin Molex power cable. If you don't have an available Molex or IDE connector inside your computer, you will need to install additional components to accommodate your new CDRW or DVD drive; in that case, you will likely want to find a more experienced geek to assist you. When possible, it is always best to connect your CDRW or DVD drive to an IDE cable that is not connected to your hard drive. Almost all motherboards have two IDE connectors, though you may need an additional IDE cable. Connecting the CDRW or DVD drive to the same cable as your hard drive will work, but it is not the ideal configuration.

Step 8 Assuming you had the Molex and IDE connectors available, you are now ready to connect them to your new drive. Connecting the two cables is easy since both are keyed. To put it another way: keyed connectors can only be installed one way. You cannot connect them improperly.
Step 9 After you have connected both the Molex and IDE cables to your new drive, you are ready to reinstall the side panels onto your computer. Once your side panels are reinstalled, you can power up your computer.
Step 10 The last step in installing your DVD or CDRW drive is to install the software needed to burn music or movies, assuming the drive you have installed is a CDRW or DVD burner. Typically, no drivers are required for the drive to be recognized by your computer. You are now finished installing your CDRW or DVD drive. Happy listening, viewing, and/or burning!
Tech Tip 74 - Installing a Power Supply By Shane McGlaun

Installing a power supply is not a hard job. However, a new geek can be intimidated the first time they open a case and see the mass of wires and ominous-looking components. If you follow these simple steps, you can replace your power supply unit (PSU) in no time and with little effort. It is also important to note before we get started that a computer PSU stores power even when unplugged, and you should never open the case of the PSU itself.

Step 1 Gather the tools you need for this task. Namely, you need a Phillips screwdriver to remove the screws holding your side panels and your old PSU (assuming you are replacing an existing PSU). You may also need a set of wire cutters, a knife, or scissors to remove wire ties that may be securing the cabling from your PSU to the chassis or case of your computer.

Step 2 After you get your tools together, start by turning your computer off and unplugging the power cable from both the wall and the back of the power supply. Then, remove the side panels from your computer.
Step 3 Once your computer is powered down and the side panels are removed, carefully start removing the power connectors from the motherboard, hard drives, and optical drives. Sometimes Molex connectors can be stubborn to remove, so take your time and you will limit the risk of damaging your system. Depending on how old your system is, you may have either a 24-pin or 20-pin main power connector to the motherboard, as well as a 4-pin P4 power connector to the motherboard. Some boards also require Molex connectors to be connected to the motherboard for use. In this step, simply disconnect every power dongle running from your PSU to a component of your system.

Step 4 Once you have all the power connectors disconnected, you are ready to use your screwdriver to remove the four screws that secure your PSU to the case itself. Beware in this step: depending on the design of your case, once you remove the screws holding the PSU to the case, the PSU could fall into your computer case and injure you or, worse, damage your beloved computer. Be sure to keep a hand on the PSU as you remove the screws.

Step 5 Once the screws are removed, you are ready to remove the PSU from your case. Take care as you do this step in case you overlooked a cable that may still be plugged in. The cables from the PSU also tend to get tangled up on things inside the computer. You don't want to cause any damage to your other components by hastily removing the old PSU from your PC case. If you find that any of the cables are secured to the PC case with wire ties, use your wire cutters or knife to remove them.
Step 6 Once you have the old PSU out of your case, you are ready to bolt your new PSU into your PC case. You can reuse the same screws that held your old PSU in place, though most new PSUs include new screws as well. Be sure that you align the new PSU in the case so you can fasten all four screws snugly.
Step 7 Once you have your new PSU secured to the case of your PC, you are ready to reconnect the power dongles to your hard drives, optical drives, and motherboard. Many newer PSUs have a 24-pin main board power connector. If you are replacing the PSU on a system that uses a 20-pin motherboard connector, it can get a bit confusing at this point. Every quality PSU I have seen allows four of the pins on the 24-pin main power connector to be detached for compatibility with an older 20-pin motherboard. If your PSU does not allow four of the pins to be removed from the 24-pin main power connector, and your board requires 20-pin main power, you will need either a different PSU or a different motherboard to continue. Many newer PSUs also use an 8-pin power connector in addition to the 24-pin main power connector to supply the motherboard. Every PSU I have used offers an adapter to change the 8-pin main board power dongle to a 4-pin P4 power dongle. Be sure to use this if your board requires a 4-pin P4 connector and your PSU has the newer 8-pin connector built on. If you don't plug in the additional 8-pin or 4-pin power connector, your system will likely not boot.

Step 8 Once you have all of your drives reconnected, as well as the power to your motherboard, you are ready to plug your computer back in and power it up. If it powers up and boots to Windows, and all of your optical drives work, you have successfully
replaced your PSU. If it won't power up, first check that the power rocker switch, commonly found on the back of your PSU on the rear of the case, is turned to the ON position. If that switch is ON but you still can't power your system up, check that you've connected all required power connectors to your motherboard. If your system powers up but you get an error message that no operating system is found (or a similar message), check that you remembered to connect the power to your hard disk drives. If you check all of these things and your system still won't boot up, it is time to call in a superior geek to get you up and running.
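The 20-pin versus 24-pin compatibility rules from Step 7 boil down to a simple check. This is only an illustrative sketch of that logic; the function name and parameters are hypothetical, and it covers just the main-board connector case discussed above:

```python
def can_power_board(psu_main_pins, psu_detachable_4pins, board_main_pins):
    """Return True if the PSU's main power connector can mate with the board.

    psu_main_pins:        20 or 24 (pins on the PSU's main power connector)
    psu_detachable_4pins: True if four pins can detach from a 24-pin plug
    board_main_pins:      20 or 24 (pins the motherboard's socket accepts)
    """
    if psu_main_pins == board_main_pins:
        # Matching connectors always work.
        return True
    if psu_main_pins == 24 and board_main_pins == 20:
        # A 24-pin PSU can feed a 20-pin board only if the extra
        # four pins can be removed from the connector.
        return psu_detachable_4pins
    # A 20-pin PSU cannot fill a board that requires 24-pin main power.
    return False
```

So a 24-pin PSU with a detachable 4-pin section works on a 20-pin board, but a fixed 24-pin connector on a 20-pin-only board means a different PSU or motherboard, exactly as Step 7 warns.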
Step 9 If everything powers up, your PC boots to Windows and your optical drives work, you are ready to put the side panels back on your system and you are done. Congratulations!
Tech Tip 75 - Parents' Primer to Children and the Internet By Kimmy Powell

Let's face it. Technical literacy is a fact of life, and our children know more about what's happening on the Web than most adults do. As their world expands and they become familiar with the resources the Net has to offer, we are faced with a dilemma: in an environment where anything goes, how do we monitor what our children see and do when they are more proficient than we are with Cyberspace's offerings? Tried-and-true parenting methods should work with 21st century Cyberspace just as they do in matters of the real world. By becoming active participants in a child's learning, parents can actually help prevent unsolicited behaviors and risks their child may encounter online, and help kids reap the greatest benefit from their experience in this medium.

Benefits and Risks of Web Surfing We've all heard the tales from the news of child predators lurking about the Internet: stories of meetings between underage kids and much older adults. We're also aware of the existence of indecent materials littering the Internet. While these occurrences are rare, they are nevertheless a part of Cyberspace, part of the world of e-mails, chat rooms, instant messaging, and forums. Kids may run a gauntlet of unpleasant experiences such as hostile language, online bullying, and gossip. They can expose your computer to malicious software and viruses, be targeted by hackers when they download seemingly safe attachments and files, create potential financial and legal
problems when they disclose a parent's credit card number, or release too much personal information. The Internet remains unregulated and open to all. While these risks may deter you from allowing your child the luxury of Internet access, the benefits could potentially be much greater. Cyberspace is a valuable resource if kids can learn to filter content appropriately. Not only can a kid get ideas for science projects and learn about history from a variety of sources, but kids can learn about any subject, usually in much greater detail and with much more live data than books can offer. The Internet can help children learn problem-solving skills, improve writing skills, and teach programming, and kids can learn to analyze the pros and cons of the things they find in their research. These are important tools your kids can use in the future in an economy turning increasingly towards information services. On the social scene, Cyberspace provides a great environment for kids to meet others of like mind and interest. Unlike school yards and classrooms, where kids are pigeonholed into being of a certain class and stature, there is no face-to-face on the Internet. A child can pretty much be who he or she is without feeling the same level of alienation as they may in the classroom. The key to making Cyberspace educational rather than harmful is active parent participation and involvement. Set the ground rules for online time. It's the same as with any other real-world parenting problem.

Establishing Rules of the Road To prevent the risks of web surfing from becoming a reality, take responsibility for monitoring your child's use of the Internet. Understand and be knowledgeable about the medium itself and decide what is appropriate for your child. Make the Internet a family activity. Create guidelines of acceptable activities. Create a comfort zone between yourself and your child where communication is possible.
If your kids find something inappropriate while surfing, they should feel comfortable coming to you with their concerns without the fear of reprisal. Report suspicious activity or pornographic materials to CyberTipline or the National Center for Missing and Exploited Children.