Results
#1. Within a CA (Certificate Authority) hierarchy, what specific role does a subordinate CA fulfill?
✅ Correct Answer:
Subordinate CAs provide control over certificate issuance while avoiding the cost of being a root CA.
In a PKI (Public Key Infrastructure) hierarchy:
- The root CA is the top-level authority and is highly protected, often kept offline.
- A subordinate CA (also called an intermediate CA) is trusted because its certificate is signed by the root CA.
- Subordinate CAs are used to issue and manage certificates for users, servers, or other entities.

This design:
- Limits exposure of the root CA
- Allows delegation of certificate issuance
- Improves scalability and management
- Avoids the operational and security risks of using the root CA for day-to-day certificate issuance
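The chain of trust described above can be sketched with a textbook toy RSA signing scheme. The primes, names, and certificate strings below are illustrative only; real CAs use 2048-bit-plus keys, padding schemes, and X.509 certificates:

```python
import hashlib

def toy_keypair(p, q, e=17):
    """Textbook RSA keypair from two small primes (illustration only)."""
    n = p * q
    d = pow(e, -1, (p - 1) * (q - 1))  # private exponent
    return (e, n), (d, n)

def sign(data: bytes, priv) -> int:
    d, n = priv
    h = int.from_bytes(hashlib.sha256(data).digest(), "big") % n
    return pow(h, d, n)

def verify(data: bytes, sig: int, pub) -> bool:
    e, n = pub
    h = int.from_bytes(hashlib.sha256(data).digest(), "big") % n
    return pow(sig, e, n) == h

# Root CA keypair (kept offline) and subordinate CA keypair
root_pub, root_priv = toy_keypair(61, 53)
sub_pub, sub_priv = toy_keypair(89, 97)

# The root signs the subordinate CA's "certificate";
# the subordinate then issues a server certificate for day-to-day use.
sub_cert = b"CN=Example Sub CA"
sub_sig = sign(sub_cert, root_priv)
server_cert = b"CN=www.example.com"
server_sig = sign(server_cert, sub_priv)

# A client that trusts only the root can walk the chain downward.
assert verify(sub_cert, sub_sig, root_pub)       # sub CA vouched for by root
assert verify(server_cert, server_sig, sub_pub)  # server vouched for by sub CA
```

The root key signs only the subordinate's certificate, so it can stay offline while the subordinate handles routine issuance.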
#2. What immediate actions should Selah take in response to her organization’s recent breach, which resulted in the exposure of the private keys for their certificates?
✅ Correct Answer:
Revoke the compromised certificates.
What to Do When Certificate Private Keys Are Compromised
Digital certificates are a critical part of secure communication on the internet. They rely on public and private key pairs to establish trust and encrypt data. If a certificate’s private key is exposed, that certificate must be considered fully compromised.
Understanding the correct response is essential for real-world security and for CompTIA Security+.
Why Private Key Exposure Is Critical
A private key must remain secret. If an attacker gains access to it, they can:
- Impersonate the legitimate website or service
- Perform man-in-the-middle (MITM) attacks
- Decrypt encrypted traffic
- Sign malicious content
Once a private key is exposed, trust is permanently broken.
The Immediate Correct Action: Certificate Revocation
The first and most important action after private key exposure is to revoke the affected certificates.
Certificate revocation:
- Marks the certificate as no longer trusted
- Prevents clients from accepting it
- Stops attackers from using the compromised certificate
Revocation information is published using:
- Certificate Revocation Lists (CRLs)
- Online Certificate Status Protocol (OCSP)
Browsers and systems check these sources to confirm whether a certificate is still valid.
Why Revocation Must Happen Before Replacement
Some administrators mistakenly think replacing certificates is enough. This is incorrect.
If certificates are replaced without revocation:
- Old compromised certificates remain valid
- Attackers can still exploit them
- Clients may continue trusting them
Revocation ensures the compromised certificates are explicitly rejected.
Why Other Responses Are Incorrect
- Self-signed certificates are not trusted and do not fix compromise
- Wildcard certificates expand risk instead of reducing it
- Changing hostnames does not invalidate old certificates
None of these actions remove trust from a compromised certificate.
Proper Certificate Incident Response Flow
When private keys are exposed, the correct sequence is:
1. Revoke the compromised certificates
2. Publish revocation status (CRL / OCSP)
3. Generate new key pairs
4. Reissue new certificates
5. Deploy and verify new certificates
Skipping revocation leaves a security gap.
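The client-side validity check can be sketched as a lookup against published revocation data. This is a simplification with made-up serial numbers; real clients parse signed CRLs or query OCSP responders:

```python
from datetime import datetime, timezone

# Serial numbers the CA has published as revoked (hypothetical values)
revoked_serials = {"04A1F2", "09BB31"}

def certificate_trusted(serial: str, not_after: datetime, now: datetime) -> bool:
    """A certificate is trusted only if it is unexpired AND not revoked."""
    return now <= not_after and serial not in revoked_serials

now = datetime(2026, 1, 15, tzinfo=timezone.utc)
expiry = datetime(2026, 12, 31, tzinfo=timezone.utc)

# Replacing a certificate without revoking the old one leaves it trusted:
assert certificate_trusted("77C3D0", expiry, now)      # new cert: accepted
assert not certificate_trusted("04A1F2", expiry, now)  # revoked cert: rejected
```

This illustrates why replacement alone is insufficient: an unrevoked certificate still passes every validity check until its natural expiry.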
#3. To address Charles’s objective of minimizing the potential impact of compromised credentials, which security control from the following options is MOST suitable for achieving this goal?
✅ Correct Answer:
Zero trust
Why Zero Trust Is the MOST Suitable Control
Charles’s objective is to minimize the potential impact of compromised credentials.
This wording is critical for Security+ questions.
Zero Trust is a security model built on the assumption that:
Credentials will eventually be compromised.
Instead of trusting users after login, zero trust:
- Continuously verifies identity and context
- Limits access using least privilege
- Segments access to reduce lateral movement
- Revalidates users, devices, and sessions
If credentials are stolen, zero trust contains the damage by preventing broad or persistent access.
This directly aligns with the goal of minimizing impact, not just preventing compromise.
Why the Other Options Are Less Suitable
❌ Single sign-on (SSO)
- Reduces password fatigue
- Improves user experience
- Increases blast radius if compromised
- One stolen credential can grant access to multiple systems
➡️ Opposite of minimizing impact
❌ Federation
- Extends identity trust across organizations
- Often used with SSO
- A compromised federated identity can affect multiple environments
➡️ Increases scope of compromise
❌ Multifactor authentication (MFA)
- Helps prevent credential compromise
- Does not limit damage once credentials are compromised
- Does not control lateral movement or ongoing access
➡️ Strong preventive control, but not the best for impact reduction
Security+ Exam Keyword Breakdown
| Keyword in Question | What CompTIA Wants |
|---|---|
| “Minimize impact” | Damage containment |
| “Compromised credentials” | Assume breach |
| “MOST suitable” | Best strategic control |
These point directly to Zero Trust.
Exam Tip 🔐
- Prevent compromise → MFA
- Reduce impact after compromise → Zero Trust
- Simplify login → SSO
- Cross-org access → Federation
#4. What control type is MOST appropriate to describe the solution that Ben has implemented, which involves deploying a data loss prevention (DLP) tool capable of inspecting data, identifying specific data types, and flagging them for review before sending corresponding emails outside the organization?
✅ Correct Answer:
Preventive
Why Preventive Is the MOST Appropriate Control Type
Ben implemented a Data Loss Prevention (DLP) tool that:
- Inspects data
- Identifies specific data types (e.g., PII, PHI, intellectual property)
- Flags the data for review before emails are sent outside the organization
The key phrase here is “before sending”.
A preventive control is designed to stop an incident from happening. In this case, the DLP solution actively intervenes before sensitive data leaves the organization, thereby preventing data exfiltration.
Even though the tool inspects and flags data, its purpose is to block or stop unauthorized data transfer, which clearly makes it a preventive control.
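The preventive behavior can be sketched as a gate that scans the outbound message for a sensitive pattern (a US Social Security number here) and holds it before it is sent. The pattern and function names are illustrative; real DLP engines use many data classifiers:

```python
import re

# Simple SSN pattern -- a real DLP engine uses many classifiers and context
SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def outbound_email_gate(body: str) -> str:
    """Return 'send' or 'hold-for-review' BEFORE the message leaves."""
    if SSN_PATTERN.search(body):
        return "hold-for-review"   # preventive: the incident never happens
    return "send"

assert outbound_email_gate("Quarterly report attached.") == "send"
assert outbound_email_gate("Employee SSN: 123-45-6789") == "hold-for-review"
```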
Why the Other Options Are Incorrect
❌ Detective
- Detective controls identify or alert after an event occurs
- Examples: logs, IDS alerts, audits
- If the email had already been sent and then flagged, this would be detective
➡️ Since the DLP acts before data leaves, it is not primarily detective
❌ Corrective
- Corrective controls are used after an incident to fix or recover
- Examples: restoring from backups, patching systems, reimaging devices
➡️ The DLP is not correcting damage; it is stopping it from occurring
❌ Managerial
- Managerial (administrative) controls include policies, procedures, and risk management
- Examples: data handling policies, security awareness training
➡️ A DLP tool is a technical control, not a managerial one
Security+ Exam Keyword Breakdown
| Keyword / Phrase | Indicates |
|---|---|
| “Before sending” | Preventive |
| “Flagging for review” | Control enforcement |
| “DLP tool” | Technical control |
| “Stop data exfiltration” | Preventive |
Security+ Exam Tip 🔐
- Stops the incident → Preventive
- Detects after it happens → Detective
- Fixes damage → Corrective
- Policies & governance → Managerial
When DLP blocks or holds data before transmission, always think Preventive.
#5. In response to Isaac’s concern regarding the vulnerability of short passwords to brute-force attacks if their hashes are compromised, he seeks a technical solution that can enhance the resistance of the hashes against cracking without burdening users with the requirement of longer passwords. What solution can he implement to address this concern?
✅ Correct Answer:
Implement key stretching techniques.
Why This Is the Correct Answer
Isaac’s concern has three important constraints:
1. Short passwords (users won’t be forced to create longer ones)
2. Hashes may be compromised
3. Protection against brute-force / cracking attacks
Key stretching directly addresses all three.
What Key Stretching Does
Key stretching makes password hashes computationally expensive to crack by:
- Repeating the hashing process many times (thousands or millions of iterations)
- Slowing down each password guess an attacker makes
- Dramatically increasing the cost of brute-force and dictionary attacks
Common key-stretching algorithms include:
- PBKDF2
- bcrypt
- scrypt
- Argon2
Even if attackers obtain the password hashes, each guess takes much longer to compute—making short passwords far more resistant to cracking without changing the user experience.
This is exactly what the question asks for.
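Key stretching is available in the Python standard library as `hashlib.pbkdf2_hmac`; the iteration count is what slows each attacker guess. The 600,000 iterations below is an illustrative figure:

```python
import hashlib, os

password = b"short1"          # short password, user experience unchanged
salt = os.urandom(16)         # unique random salt per user

# A single SHA-256 hash: attackers can test billions of guesses per second.
fast_hash = hashlib.sha256(salt + password).digest()

# PBKDF2 repeats the HMAC-SHA256 step 600,000 times,
# making each offline guess roughly 600,000 times more expensive.
stretched = hashlib.pbkdf2_hmac("sha256", password, salt, 600_000)

# Same password + same salt always yields the same value, so logins still work.
assert stretched == hashlib.pbkdf2_hmac("sha256", password, salt, 600_000)
assert len(stretched) == 32
```

The defender pays the 600,000-iteration cost once per login; the attacker pays it for every single guess.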
Why the Other Options Are Incorrect
❌ Use a collision-resistant hashing algorithm
- Collision resistance prevents two different inputs producing the same hash
- Brute-force attacks do not rely on collisions
- Does not slow down password guessing
➡️ Does not solve the problem described
❌ Implement pass-the-hash algorithms
- Pass-the-hash is an attack technique, not a defense
- It allows attackers to authenticate using stolen hashes
- Makes the situation worse, not better
➡️ Completely incorrect and dangerous
❌ Encrypt passwords rather than hashing them
- Stored passwords should be hashed, not encrypted
- Encryption is reversible; hashing is not
- If encryption keys are compromised, all passwords are exposed
➡️ Violates basic password security principles
Security+ Exam Keyword Breakdown
| Keyword in Question | Meaning |
|---|---|
| “Brute-force attacks” | Guessing passwords repeatedly |
| “Hashes are compromised” | Offline attack scenario |
| “Without longer passwords” | No user burden |
| “Enhance resistance” | Slow attackers |
These point directly to key stretching.
#6. Quentin intends to implement a single sign-on system to enable his users to log in to cloud services. Among the following technologies, which one is he MOST LIKELY to deploy?
✅ Correct Answer:
OpenID
Why OpenID Is the MOST Likely Technology
Quentin wants to implement single sign-on (SSO) for cloud services. The key phrases here are:
- Single sign-on
- Cloud services
OpenID (specifically OpenID Connect, built on OAuth 2.0) is a modern, cloud-native identity protocol designed specifically to support SSO across web and cloud-based applications.
With OpenID:
- Users authenticate once
- Identity is shared securely with multiple cloud services
- Authentication is handled by an identity provider (IdP) such as Google, Microsoft, Okta, or Azure AD
- Applications rely on tokens, not passwords
This makes OpenID the most appropriate and widely used solution for cloud SSO.
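In OpenID Connect, the identity provider issues an ID token: a JWT whose payload carries claims about the authenticated user. The sketch below encodes and decodes a hypothetical payload (issuer, subject, and audience values are made up, and the signature segment a real token carries is omitted):

```python
import base64, json

# Hypothetical ID-token claims an IdP might assert after one login
claims = {
    "iss": "https://idp.example.com",  # who authenticated the user
    "sub": "quentin",                  # stable user identifier
    "aud": "cloud-crm-app",            # which cloud service may accept it
}

# JWT segments are base64url-encoded JSON (real tokens add header + signature)
segment = base64.urlsafe_b64encode(json.dumps(claims).encode()).rstrip(b"=")

# A relying cloud service decodes the payload to learn who the user is
padded = segment + b"=" * (-len(segment) % 4)
decoded = json.loads(base64.urlsafe_b64decode(padded))
assert decoded["sub"] == "quentin"
assert decoded["aud"] == "cloud-crm-app"
```

In practice the relying service must also verify the IdP's signature over the token before trusting any claim in it.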
Why the Other Options Are Incorrect
❌ TACACS+
- Used for network device administration
- Common in routers, switches, and firewalls
- Not used for cloud application SSO
➡️ Designed for device access control, not cloud authentication
❌ LDAP
- Directory access protocol
- Used for on-premises authentication
- Does not provide SSO by itself
- Not cloud-native
➡️ Often used behind SSO systems, not as the SSO mechanism
❌ Kerberos
- Ticket-based authentication protocol
- Common in Windows Active Directory environments
- Requires tight time synchronization and on-prem infrastructure
- Not suitable for internet-facing cloud services
➡️ Excellent for internal networks, not cloud SSO
Security+ Exam Tip 🔐
- Cloud / web SSO → OpenID / OpenID Connect
- On-prem directory auth → LDAP / Kerberos
- Network devices → TACACS+
When you see SSO + cloud, think OpenID first.
#7. Within Susan’s organization, which of the following zero-trust control plane components utilizes rules to determine access to a service based on factors such as the security status of users’ systems, threat data, and comparable information?
✅ Correct Answer:
Policy-driven access control
Why Policy-driven access control Is Correct
In a zero-trust architecture, access decisions are not static and are not based solely on user identity. Instead, access is determined by policies that evaluate multiple contextual factors before allowing access to a service.
Policy-driven access control uses rules and policies that can consider:
- User identity
- Device security posture (patched, encrypted, compliant)
- Location and network context
- Threat intelligence and risk signals
- Time, behavior, and sensitivity of the resource
Because the question explicitly states that access is determined using rules and factors such as system security status and threat data, this directly maps to policy-driven access control, which is the core decision-making component of zero trust.
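The decision logic can be sketched as a policy function that weighs several contextual signals, not just identity. The field names and thresholds below are illustrative:

```python
def access_decision(request: dict) -> str:
    """Evaluate contextual rules; deny unless every policy check passes."""
    if not request.get("user_authenticated"):
        return "deny"
    if not request.get("device_compliant"):        # patched, encrypted, managed
        return "deny"
    if request.get("threat_score", 100) > 50:      # external threat intelligence
        return "deny"
    if request.get("resource_sensitivity") == "high" and not request.get("mfa"):
        return "deny"
    return "allow"

healthy = {"user_authenticated": True, "device_compliant": True,
           "threat_score": 10, "resource_sensitivity": "high", "mfa": True}
risky = dict(healthy, device_compliant=False)  # same user, bad device posture

assert access_decision(healthy) == "allow"
assert access_decision(risky) == "deny"   # identity alone is not enough
```

Note that the same authenticated user is denied when the device posture signal changes, which is the defining behavior of policy-driven access control.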
Why the Other Options Are Incorrect
❌ Secured zones
- Secured zones refer to segmented network areas
- They define where access can occur, not how access decisions are made
- They do not evaluate device posture or threat data
➡️ This is about segmentation, not decision logic
❌ Adaptive authorization
- Adaptive authorization focuses on adjusting permissions during a session
- Often triggered by changes in behavior or risk level
- It is related, but it relies on policies defined elsewhere
➡️ It consumes policy decisions but does not define the rules themselves
❌ Threat scope reduction
- This focuses on limiting blast radius and lateral movement
- Implemented through segmentation and least privilege
- Does not directly make access decisions based on rules
➡️ Outcome-focused, not policy-evaluation focused
#8. Murali intends to apply a digital signature to a file. Which key does he require to accomplish the signing process?
✅ Correct Answer:
His private key
Why His Private Key Is Required
A digital signature is created using the sender’s private key.
When Murali signs a file:
1. The file is hashed (e.g., using SHA-256).
2. That hash is encrypted with Murali’s private key.
3. The encrypted hash becomes the digital signature.
This proves three things:
- Authenticity – the file was signed by Murali
- Integrity – the file was not altered after signing
- Non-repudiation – Murali cannot deny signing it
Only Murali has access to his private key, which is why it must be used for signing.
Why the Other Options Are Incorrect
❌ The recipient’s public key
- Used for encryption, not signing
- Encrypts data so only the recipient can decrypt it
- Does not prove who created the data
➡️ Used for confidentiality, not digital signatures
❌ His public key
- Public keys are used to verify signatures, not create them
- Anyone can have Murali’s public key
➡️ Cannot provide proof of authorship
❌ The recipient’s private key
- The recipient’s private key is secret to the recipient
- Murali cannot and should not have access to it
➡️ Makes no sense in the signing process
How Signature Verification Works (Important for Security+)
After Murali signs the file:
- The recipient uses Murali’s public key to verify the signature
- If verification succeeds:
  - The file is authentic
  - The file is unchanged

Signing = sender’s private key
Verification = sender’s public key
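The hash-then-sign flow can be shown numerically with textbook toy RSA values (illustration only; real signatures use 2048-bit-plus keys and a padding scheme such as PSS):

```python
import hashlib

# Toy RSA numbers: public modulus, PUBLIC exponent, PRIVATE exponent
n, e, d = 3233, 17, 2753

file_bytes = b"quarterly-report-v2"

# Step 1: hash the file. Step 2: apply the PRIVATE key to the hash.
digest = int.from_bytes(hashlib.sha256(file_bytes).digest(), "big") % n
signature = pow(digest, d, n)   # only the holder of d can compute this

# Verification: anyone applies the signer's PUBLIC key to the signature
recovered = pow(signature, e, n)
assert recovered == digest      # matches => authentic and unmodified
```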
#9. What is the term used to describe the type of obfuscation in which additional hidden information is believed to be concealed within an image found by Michelle in an attacker’s file directory?
✅ Correct Answer:
Steganography
Why Steganography Is the Correct Answer
Steganography is a technique used to hide secret information inside another file, most commonly inside images, audio files, or videos.
In this scenario:
- Michelle finds an image
- The image is believed to contain hidden information
- The file is found in an attacker’s directory
This exactly matches steganography, which attackers often use to:
- Hide commands
- Hide malware
- Hide stolen data
- Evade detection by security tools
The image looks normal, but extra data is secretly embedded inside it.
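A common technique is least-significant-bit (LSB) embedding: each bit of the secret replaces the lowest bit of one pixel byte, changing the image imperceptibly. A minimal sketch on a stand-in byte buffer:

```python
def embed(cover: bytearray, secret: bytes) -> bytearray:
    """Hide each bit of `secret` in the least significant bit of cover bytes."""
    bits = [(byte >> i) & 1 for byte in secret for i in range(8)]
    assert len(bits) <= len(cover), "cover too small"
    stego = bytearray(cover)
    for i, bit in enumerate(bits):
        stego[i] = (stego[i] & 0xFE) | bit   # overwrite only the lowest bit
    return stego

def extract(stego: bytearray, length: int) -> bytes:
    bits = [b & 1 for b in stego[: length * 8]]
    return bytes(
        sum(bits[i * 8 + j] << j for j in range(8)) for i in range(length)
    )

pixels = bytearray(range(200))            # stand-in for raw image pixel data
hidden = embed(pixels, b"exfil")
assert extract(hidden, 5) == b"exfil"     # recoverable by whoever knows to look
assert max(abs(a - b) for a, b in zip(pixels, hidden)) <= 1  # near-identical
```

Each byte changes by at most one, which is why the carrier image looks unaltered to both humans and naive file scanners.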
Why the Other Options Are Incorrect
❌ Image blocking
- Refers to preventing images from loading (e.g., in emails or browsers)
- Does not involve hiding data inside images
➡️ Not related to obfuscation
❌ PNG warping
- Not a recognized cybersecurity term
- No defined use in CompTIA Security+
➡️ Invalid option
❌ Image hashing
- Used to create a fingerprint of an image
- Helps with integrity checking or duplicate detection
- Does not hide information inside the image
➡️ Opposite of hiding data
#10. By utilizing public keys, asymmetric encryption helps address which significant challenge related to encryption?
✅ Correct Answer:
Key exchange
Why Key Exchange Is the Correct Answer
One of the biggest challenges in traditional (symmetric) encryption is securely sharing the secret key. If two parties need the same secret key, that key must somehow be exchanged without an attacker intercepting it.
Asymmetric encryption solves this problem by using public and private keys:
- The public key can be shared openly with anyone
- The private key is kept secret by its owner
Because data encrypted with a public key can only be decrypted with the corresponding private key, there is no need to secretly exchange a shared key beforehand.
This makes asymmetric encryption ideal for:
- Secure initial communication
- Establishing trust over untrusted networks (like the internet)
- Securely exchanging symmetric session keys (as used in TLS)
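Diffie-Hellman, one of the asymmetric techniques TLS relies on, lets two parties agree on a shared secret without ever transmitting it. The sketch below uses a deliberately small prime; real deployments use 2048-bit groups or elliptic curves:

```python
import secrets

# Public parameters both sides (and any eavesdropper) can see
p = 2**127 - 1   # a prime modulus (far too small for real use)
g = 3

# Each side keeps its own private value secret
a = secrets.randbelow(p - 2) + 1   # Alice's private value
b = secrets.randbelow(p - 2) + 1   # Bob's private value

# Only the public values g^a and g^b ever cross the wire
A, B = pow(g, a, p), pow(g, b, p)

# Both sides derive the same session key; an eavesdropper seeing
# only p, g, A, B cannot feasibly recover it
alice_secret = pow(B, a, p)
bob_secret = pow(A, b, p)
assert alice_secret == bob_secret
```

The agreed value then seeds a symmetric cipher for the bulk of the session, which is exactly the "exchange symmetric session keys" role described above.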
Why the Other Options Are Incorrect
❌ Key length
- Key length affects strength, not key sharing
- Symmetric and asymmetric algorithms each support a range of key lengths
- Asymmetric encryption does not exist to solve key size issues
➡️ Not the problem being addressed
❌ Evil twins
- Evil twin attacks involve rogue wireless access points
- This is a Wi-Fi security issue, not an encryption key problem
➡️ Unrelated to encryption mechanics
❌ Collision resistance
- Collision resistance is a property of hashing algorithms
- It has nothing to do with encryption or key sharing
➡️ Wrong cryptographic concept
#11. What capabilities does a root SSL (TLS) certificate possess?
✅ Correct Answer:
Generate a signing key and use it to sign a new certificate.
Why This Is the Correct Answer
A root SSL/TLS certificate belongs to a Root Certificate Authority (Root CA).
The defining capability of a root CA is that it can:
- Generate its own key pair
- Use its private key to sign other certificates, including:
  - Subordinate (intermediate) CA certificates
  - In some cases, end-entity certificates

This signing capability is what makes the root CA the trust anchor of the entire PKI hierarchy. All trust flows downward from the root CA because its certificate is self-signed and pre-trusted by operating systems and browsers.
If a root CA signs a certificate, that certificate becomes trusted (as long as the root CA itself is trusted).
Why the Other Options Are Incorrect
❌ Remove a certificate from a CRL
- CRLs (Certificate Revocation Lists) are published by CAs, but:
  - Certificates are added to a CRL when revoked
  - They are not “removed” to restore trust
- Root CAs may issue CRLs, but “removing a certificate from a CRL” is not a defining capability
➡️ Incorrect understanding of revocation
❌ Authorize new CA users
- PKI does not work with “CA users”
- Authorization of users is handled by identity and access management (IAM) systems
- Root CAs authorize other CAs, not users
➡️ Not a PKI function
❌ Allow key stretching
- Key stretching is a password-protection technique
- It is unrelated to SSL/TLS certificates or root CAs
- Used in password hashing algorithms like bcrypt or PBKDF2
➡️ Completely unrelated concept
#12. In Isaac’s physical penetration test, what objective must he achieve in order to bypass an access control vestibule?
✅ Correct Answer:
He needs to persuade an individual to allow him to follow them through two doors in a row.
Why This Is the Correct Answer
An access control vestibule (often called a mantrap) is a security feature designed to prevent tailgating by using two interlocking doors:
- Only one door can be open at a time
- A person must be authenticated twice (once per door)
- The system ensures one person at a time passes through
To bypass a vestibule during a physical penetration test, Isaac must successfully tailgate through both doors, which means:
- Convincing someone to let him in through the first door
- Convincing the same or another person to let him through the second door
Bypassing only one door is not enough, because the second door still blocks access.
Why the Other Options Are Incorrect
❌ Persuade an individual to allow him to follow them through a single door
- This would work for a normal access-controlled door
- A vestibule has two doors
- Passing only one door does not grant access
➡️ Incomplete bypass
❌ Acquire an individual’s access card
- Having a card may allow access, but the question focuses on bypassing via human behavior
- Vestibules are specifically designed to defeat tailgating, not card theft
➡️ Not the objective described
❌ Acquire the individual’s access PIN
- PIN theft is a credential compromise
- Vestibules may still use additional controls (biometrics, weight sensors)
- Not required to bypass via tailgating
➡️ Not the correct penetration objective
#13. In Valentine’s quest to identify if an intruder has gained access to a secured file server, which of the following techniques, when combined with a data loss prevention tool, will prove most effective in detecting data exfiltration?
✅ Correct Answer:
A honeyfile
Why Honeyfile Is the Correct Answer
Valentine wants to detect whether an intruder has accessed a secured file server, and the solution must be effective when combined with a Data Loss Prevention (DLP) tool.
A honeyfile is a decoy file placed on a system that:
- Looks legitimate and valuable (e.g., “Payroll_2026.xlsx”)
- Should never be accessed or exfiltrated during normal operations
- Triggers alerts if accessed, opened, copied, or transmitted
When combined with DLP:
- The DLP tool monitors data movement
- Any attempt to read, copy, or exfiltrate the honeyfile is immediately flagged
- This provides high-confidence detection of unauthorized access and data exfiltration
This makes a honeyfile the most effective choice for detecting intruders accessing sensitive files.
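One way to wire a honeyfile into a DLP rule is to seed the decoy with a unique canary marker and alert whenever that marker appears in outbound data. The file name and rule shape below are illustrative:

```python
import secrets

# Create a decoy file whose contents include a unique, unguessable marker
canary = secrets.token_hex(16)
honeyfile_contents = f"CONFIDENTIAL Payroll_2026 -- ref {canary}"

def dlp_outbound_check(payload: str) -> str:
    """DLP rule: any outbound data containing the canary is an exfil alert."""
    return "ALERT: honeyfile exfiltration" if canary in payload else "ok"

# Normal business traffic passes; the decoy leaving the network trips the alarm
assert dlp_outbound_check("Meeting notes for Tuesday") == "ok"
assert dlp_outbound_check(honeyfile_contents).startswith("ALERT")
```

Because no legitimate workflow ever touches the decoy, any alert on the canary is a high-confidence signal of intrusion.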
Why the Other Options Are Incorrect
❌ Honeynet
- A honeynet is a network of multiple honeypots
- Used to study attacker behavior at a large scale
- Overkill for detecting access to a specific file server
➡️ Too broad and complex for this objective
❌ Honeypot
- A honeypot is a decoy system
- Used to lure attackers into interacting with fake servers or services
- Does not specifically detect file-level access or exfiltration
➡️ Wrong level of granularity
❌ Honeytoken
- A honeytoken is a fake credential or data value
- Often used for detecting credential misuse or database access
- Not specifically designed to detect file server access
➡️ Less precise than a honeyfile in this scenario
#14. Among the provided options, which one is typically not considered a step in the process when a transaction is recorded in a blockchain?
✅ Correct Answer:
The value of the block is determined.
Why This Is the Correct Answer
When a transaction is recorded in a blockchain, there is a well-defined process that focuses on verification, validation, and immutability. However, blockchains do not determine or calculate the “value of a block” as part of the transaction-recording process.
Blocks contain:
- Transactions
- A hash of the previous block
- A timestamp
- A nonce (in some blockchains)
But they do not assign a monetary or numerical “value” to a block itself. That concept does not exist in standard blockchain mechanics.
Therefore, “The value of the block is determined” is not a valid step.
Why the Other Options ARE Steps in Blockchain Transactions
✅ A transaction history is maintained as part of the blockchain
- Every block contains a record of transactions
- Blocks are linked together, creating a permanent transaction history
- This ensures immutability and traceability
✔️ This is a core blockchain feature
✅ The transaction is sent to a peer-to-peer network of computers
- Blockchain uses a decentralized peer-to-peer (P2P) network
- Transactions are broadcast to multiple nodes
- Nodes independently verify transactions
✔️ Required for decentralization
✅ The transaction is validated using equations
- Transactions are validated using:
  - Cryptographic hashing
  - Digital signatures
  - Consensus algorithms
- These involve mathematical computations (“equations”)
✔️ Essential for trust and security
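The hash-linking that creates the permanent transaction history can be sketched in a few lines (timestamps, nonces, and consensus are omitted for brevity):

```python
import hashlib, json

def block_hash(block: dict) -> str:
    """Hash a block's contents, excluding its own recorded hash field."""
    body = {k: v for k, v in block.items() if k != "hash"}
    return hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()

def make_block(transactions: list, prev_hash: str) -> dict:
    """Link a new block to its predecessor via the previous block's hash."""
    block = {"transactions": transactions, "prev_hash": prev_hash}
    block["hash"] = block_hash(block)
    return block

def chain_is_valid(chain: list) -> bool:
    for i, block in enumerate(chain):
        if block["hash"] != block_hash(block):
            return False   # contents no longer match the recorded hash
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False   # link to the predecessor is broken
    return True

genesis = make_block(["genesis"], prev_hash="0" * 64)
chain = [genesis, make_block(["alice->bob: 5"], genesis["hash"])]
assert chain_is_valid(chain)

# Rewriting history invalidates the hash links, which is what makes
# the transaction record effectively immutable
chain[0]["transactions"] = ["alice->mallory: 500"]
assert not chain_is_valid(chain)
```

Notice that nothing in this process assigns a "value" to a block; blocks record transactions and hash links, nothing more.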