(Page numbers refer to the print book. For e-books, refer to the Chapter and Section.)
Error: Chapter 3, Page 60, “Use a BIOS password” bullet: The term “supervisor” should be “administrator”. (Pg 39 of the Academic Edition – the book with the green cover).
Error: Chapter 7, Page 237, 4th paragraph, 2nd sentence. hosts.txt is incorrect. This should simply read as “hosts”. This file does not use an extension.
Addition: Chapter 12: PIA and PTA were not covered:
It can be difficult for an organization to define what is considered personally identifiable information (PII) and what should be collected and monitored. Analysis of that data, and of the systems that contain it, can be made easier by using a PIA and a PTA. A privacy impact assessment (PIA) is a process that assists organizations in identifying and minimizing the privacy risks of new projects or policies. A privacy threshold analysis (PTA) helps a company's departments gauge the information their systems hold, determine whether it includes PII, and decide how to appropriately treat data that the organization has acquired.
Addition: Chapter 14, Page 491, Diffie-Hellman section. Groups were not covered.
Diffie-Hellman key exchange uses standardized, globally unique prime numbers and generators to provide secure asymmetric key exchange. The original specification of IKE defined four of these groups, called Diffie-Hellman groups or Oakley groups. Since then, additional groups have been added.
For example, Group 2 uses a 1024-bit modulus, Group 14 uses a 2048-bit modulus, Group 16 uses a 4096-bit modulus, and Group 18 uses an 8192-bit modulus. As the bit size increases, so does the processing power required. Newer Diffie-Hellman groups address this by incorporating elliptic curve cryptography (ECC), whose keys are smaller and require less computing power. For example, Group 19 uses a 256-bit elliptic curve and Group 20 uses a 384-bit elliptic curve. Because of the low latency and low power consumption of ECC, many companies choose to use it within an asymmetric Diffie-Hellman key exchange.
(Pg. 333 in the Academic Edition.)
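To make the ECC variant concrete, here is a minimal sketch of an elliptic-curve Diffie-Hellman (ECDH) exchange in Python, assuming the third-party cryptography package is installed. The P-256 curve used here is the 256-bit curve that Group 19 specifies; the HKDF info label is an arbitrary placeholder.

```python
# Minimal ECDH sketch on the P-256 curve (the 256-bit curve used by Group 19).
# Requires the third-party "cryptography" package.
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

# Each side generates its own ephemeral key pair.
alice_private = ec.generate_private_key(ec.SECP256R1())
bob_private = ec.generate_private_key(ec.SECP256R1())

# Each side combines its private key with the peer's public key.
alice_shared = alice_private.exchange(ec.ECDH(), bob_private.public_key())
bob_shared = bob_private.exchange(ec.ECDH(), alice_private.public_key())
assert alice_shared == bob_shared  # both sides arrive at the same secret

# Derive a symmetric session key from the shared secret.
session_key = HKDF(algorithm=hashes.SHA256(), length=32,
                   salt=None, info=b"ike-demo").derive(alice_shared)
```

Note how neither private key ever crosses the wire; only public keys are exchanged, which is the property the groups above are standardizing.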
Error: Page 710, Practice Exam 1 Question 69:
The question should ask for three possible answers, and the answers should be: A, B, and C. The explanation is correct.
How the users are added to the OUs is another matter. To reduce time and effort, groups would most likely be used, but that is somewhat implied when talking about Windows Server administration. Exactly how it is done is not too important for this question, as long as the users are assigned the required roles.
Error: Index, Page 752, “Application layer” should be lower-case: “application layer”. Individual OSI layers are not proper nouns and should not be capitalized. (Pg 483 in the Academic Edition.)
Modification: MTBF definition. The definition of MTBF has morphed a bit over time, and some companies define it slightly differently. It appears the definition requires an update in the book. Generally, MTBF is the average number of hours between failures. For example, I have a customer whose RAID array failed three times over the past decade before I replaced it. That's three failures over 87,600 hours, giving an MTBF of 29,200 hours and a rate of 34 failures per million hours. A lot of companies extend the MTBF concept into failures per million hours to give it a quantitative aspect. Or a company might choose to use both.
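As a quick, purely illustrative check of the arithmetic above:

```python
# Verify the MTBF arithmetic from the RAID example above.
hours = 87_600          # ten years of continuous operation
failures = 3

mtbf = hours / failures                  # 29,200 hours between failures
fpmh = failures / hours * 1_000_000      # ~34 failures per million hours

print(f"MTBF: {mtbf:,.0f} hours; failures per million hours: {fpmh:.0f}")
```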
For hard drives, the big manufacturers are moving away from MTBF. That makes sense, because it was generally a lab-based number and therefore hard to replicate in the real world. For example, WD is using Component Design Life and Annualized Failure Rate for newer magnetic drives. For flash-based drives, I also go by failures per X reads/writes.
One more note: if you are dealing with hard drive failures, or even app, DB, or server failures, the more important statistic to me is how long each failure lasted. Then, with both metrics, you can figure out your yearly uptime (and potentially your ALE), as sketched below.
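For instance, if you also track mean time to repair (MTTR), availability and expected yearly downtime fall out directly. The eight-hour MTTR below is an assumed figure, not from the book.

```python
# Combine MTBF with mean time to repair (MTTR) to estimate uptime.
mtbf_hours = 29_200     # from the RAID example above
mttr_hours = 8          # assumed average outage length (hypothetical)

availability = mtbf_hours / (mtbf_hours + mttr_hours)
yearly_downtime = (1 - availability) * 8_760   # hours in a year

print(f"Availability: {availability:.5%}")       # about 99.973%
print(f"Expected downtime/year: {yearly_downtime:.1f} hours")
```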
Addition: Chapter 7, security protocol use cases. There are many security protocols that you should know for the exam. We cover these in Chapter 7. Here we will show some real-world scenarios where you would use some of these protocols.
One use case is e-mail. Common port usages for protocols such as SMTP (port 25) and POP3 (port 110) are not secure enough for most organizations today. Instead, SMTP should use port 465 and POP3 should use port 995. Both of these secure ports make use of Secure Sockets Layer (SSL) or Transport Layer Security (TLS). This is the case with many secure versions of protocols. For example, a web server that needs to be secured for e-commerce will run HTTPS, which again utilizes SSL (or, more accurately, TLS). It uses port 443 instead of the more commonly known port 80 for unsecured HTTP transactions. The same holds true for secure File Transfer Protocol (FTP) connections. Secure FTP sessions will use either FTP Secure (FTPS), which implements SSL/TLS on ports 989 and 990, or SSH File Transfer Protocol (SFTP), which relies on SSH and port 22.
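To illustrate the e-mail use case, here is a minimal Python sketch that submits mail over implicit SSL/TLS on port 465 using only the standard library. The host name and credentials are placeholders, not real values.

```python
# Hypothetical example: submit mail over implicit SSL/TLS on port 465.
import smtplib
import ssl

context = ssl.create_default_context()  # verifies the server certificate
with smtplib.SMTP_SSL("mail.example.com", 465, context=context) as server:
    server.login("user@example.com", "app-password")  # placeholder credentials
    server.sendmail("user@example.com", "dest@example.com",
                    "Subject: test\r\n\r\nSent over the secure SMTP port.")
```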
Directory services servers, such as Microsoft domain controllers, normally utilize LDAP, which runs on port 389. However, many companies require that LDAP be secured and so will implement Secure LDAP (LDAPS) on port 636. Domain Name System (DNS) servers normally still use port 53, but name resolution can be secured better with the use of DNSSEC. Remote access connections to servers and clients normally run on port 3389 (if using Microsoft Remote Desktop), but these can be modified to use SSL/TLS and work on port 443, or FIPS-validated protocols can be implemented. Time synchronization between all of these servers has historically been accomplished through the Network Time Protocol (NTP). While there is no secure version of this protocol, it can be made more secure by using the latest version of NTP and by following the NTP Best Current Practices (BCP), especially in the context of routing and switching. Many organizations have adopted IPv6 as a way of achieving more efficient network address allocation, but it also allows for additional security in the form of IPsec.
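To illustrate checking one of these secure ports, here is a small Python sketch that opens a TLS connection and reports the negotiated protocol version. The host names are placeholders, and this only confirms that TLS is answering on the port, nothing more.

```python
# Sketch: confirm a service answers with TLS on its secure port,
# e.g. Secure LDAP on 636 or HTTPS on 443. Host names are placeholders.
import socket
import ssl

def check_tls(host: str, port: int) -> None:
    context = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=5) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            print(f"{host}:{port} negotiated {tls.version()}")

check_tls("ldap.example.com", 636)   # Secure LDAP (LDAPS)
check_tls("www.example.com", 443)    # HTTPS
```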
If security is a concern for voice and video sessions, then an admin should consider using IPsec, or better yet, the Secure Real-time Transport Protocol (SRTP). SRTP can address the shortcomings of IPsec for real-time media and works together with the Session Initiation Protocol (SIP). Subscription services (such as RSS feeds and WordPress memberships) can be secured better by making use of HTTPS (SSL/TLS) with a proper certificate, and by running the latest versions of the underlying software, such as PHP, MySQL, WordPress, and any RSS feed readers (such as Feedly).
One of the main purposes of cryptography is to support confidentiality. To keep data secure and unreadable by others, we often use encryption, which is a form of obfuscation: making the data unclear and, ultimately, hiding its meaning. For example, encrypting the contents of an e-mail, or encrypting a web session, so that prying eyes cannot easily understand the data. But that is only one leg of the CIA triad. Cryptography can also be used to support integrity, for example through the use of hashing, as in digital signatures and file verification. It is also used to support authentication, when people are identified by a system and either allowed or denied access. For instance, passwords in most computer systems are cryptographically hashed.
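As a small illustration of that last point, here is how a salted, deliberately slow password hash might be produced with Python's standard library. The password and iteration count are illustrative only.

```python
# Illustrative salted password hash using PBKDF2 from the standard library.
import hashlib
import os

password = b"correct horse battery staple"  # example password only
salt = os.urandom(16)                        # unique random salt per user
digest = hashlib.pbkdf2_hmac("sha256", password, salt, 200_000)

# Store salt + digest; at login, re-derive the hash from the supplied
# password and compare with hmac.compare_digest().
print(salt.hex(), digest.hex())
```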
The key (pun intended) is to make sure that data is still highly available after all of the ciphering has been performed. Encryption techniques can be taxing on systems. Balancing resources against security constraints means that a security professional (or team) must carefully weigh proposed encryption methods against the hardware and software resources available to the company. And so, when planning this (or any) security implementation, a balance must be achieved and tested thoroughly before going live. However, if encryption is implemented properly, it can protect data while keeping it available, and it supports non-repudiation, meaning that you as a technician have proof of what a user, attacker, or bot did to the data, so the action cannot later be denied.
As mentioned, cryptography must be carried out in a way that is not resource intensive, meaning it should work well without requiring too much processing power or RAM. All encryption requires the CPU and RAM to calculate and store more data (and more complex data) than they normally would. But there are many devices that run on low amounts of power and don't have very fast CPUs or much RAM, yet still need encryption in today's security-conscious world: low-end smartphones, wearable technology, industrial embedded systems, and so on. The idea is to use (or develop) encryption methods that will work on low-power devices and run without creating much latency, yet still be secure. To this end, many companies use the Advanced Encryption Standard (AES) because it is powerful but not as resource intensive as other available encryption options. Encryption methods should also offer a fairly high level of resiliency and fault tolerance, meaning that they can recover quickly from problems that arise. For example, if a certificate-based session between a client's web browser and a web server fails to mutually authenticate, the underlying protocol (such as TLS) should be written in such a way that it will reset and attempt the connection again.
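As a rough sketch of AES in practice, the following uses authenticated AES-GCM via the third-party cryptography package. The 128-bit key size, the payload, and the associated-data label are illustrative assumptions, not anything prescribed by the book.

```python
# Sketch of authenticated AES encryption (AES-GCM) using the third-party
# "cryptography" package; key size and payload are illustrative.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=128)   # 128-bit AES suits low-power devices
aesgcm = AESGCM(key)
nonce = os.urandom(12)                      # never reuse a nonce with the same key

ciphertext = aesgcm.encrypt(nonce, b"sensor reading: 21.5C", b"device-42")
plaintext = aesgcm.decrypt(nonce, ciphertext, b"device-42")
assert plaintext == b"sensor reading: 21.5C"
```

AES-GCM is a common choice here because it provides both confidentiality and integrity in one pass, which keeps the computational cost down on constrained hardware.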
Practice Engine Errata
Error: Question ID: SY0-501-4-10-054. This question should ask for two correct answers. They are Firewall and VPN.