What could be an example of a denial-of-service attack?
A. Leapfrog attack
B. Brute force attack
C. Negative acknowledgement (NAK) attack
D. Ping of death
Ping of death
Explanation: A ping sent with a packet size larger than the maximum allowed 65,535 bytes and with no fragmentation flag set will cause a denial of service. A brute force attack is typically a text attack that exhausts all possible key combinations. A leapfrog attack, the act of telnetting through one or more hosts to preclude a trace, makes use of user ID and password information obtained illicitly from one host to compromise another host. A negative acknowledgement (NAK) attack is a penetration technique that capitalizes on a potential weakness in an operating system that does not handle asynchronous interrupts properly, leaving the system in an unprotected state during such interrupts.
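The 65,535-byte limit follows from the 16-bit total-length field in the IPv4 header. A short arithmetic sketch (Python used purely for illustration; the payload figure is a hypothetical oversized value):

```python
# The IPv4 total-length field is 16 bits, so the largest legal packet is:
MAX_IPV4_PACKET = 2**16 - 1       # 65,535 bytes
IP_HEADER_MIN = 20                # minimum IPv4 header size
ICMP_HEADER = 8                   # ICMP echo-request header size

max_legal_payload = MAX_IPV4_PACKET - IP_HEADER_MIN - ICMP_HEADER
print(max_legal_payload)          # 65507 -- the familiar maximum ping size

# A "ping of death" sends fragments whose reassembled size exceeds the limit,
# e.g. a payload like this hypothetical one:
oversized_payload = 65_600
assert oversized_payload > max_legal_payload  # reassembly overflow -> crash risk
```

Historically, reassembling such an oversized datagram overflowed buffers in vulnerable IP stacks, crashing the host.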
If an IS auditor observes that individual modules of a system perform correctly in development project tests, what should the auditor recommend to management based on these positive results?
A. Comprehensive integration testing
B. Documentation development
C. Full regression testing
D. Full unit testing
Comprehensive integration testing
Explanation: If individual modules of a system perform correctly in development project tests, the IS auditor should inform management of the positive results and recommend further comprehensive integration testing.
In an enterprise data flow architecture, which layer is concerned with transporting information between the various layers?
A. Application messaging layer
B. Desktop Access Layer
C. Data preparation layer
D. Data access layer
Application messaging layer
Explanation: The application messaging layer is the layer concerned with transporting information between the various layers of the architecture. It handles the routing of messages, ensures they reach the intended recipient, and manages the flow of data between applications.
Desktop Access Layer: This layer deals with how users access the network and interact with it. It does not handle the transport of data between layers.
Data Preparation Layer: This layer focuses on transforming and preparing raw data for use in the subsequent layers. It doesn't involve the actual transportation of data.
Data Access Layer: This layer provides access to databases and other data sources. It's responsible for retrieving and storing data, not for transporting it between layers.
Which e-commerce model covers transactions between companies and government organizations?
A. B-to-C relationships
B. B-to-E relationships
C. B-to-G relationships
D. B-to-B relationships
B-to-G relationships
Explanation: Business-to-Government (B-to-G) relationships cover all transactions between companies and government organizations. Currently, this category is in its infancy, but it could expand quite rapidly as governments use their own operations to promote awareness and growth of e-commerce. In addition to public procurement, administrations may also offer the option of electronic interchange for such transactions as VAT returns and the payment of corporate taxes.
Which fire suppression system is most suitable for use in a data center environment?
A. Carbon dioxide-based fire extinguishers
B. Wet-pipe sprinkler system
C. FM-200 system
D. Dry-pipe sprinkler system
FM-200 system
Explanation: FM-200 is safer to use than carbon dioxide and is considered a clean agent for use in gaseous fire suppression applications. Water-based systems are unsuitable because sensitive computer equipment could be damaged by the water itself. Manual firefighting (fire extinguishers) may not provide fast enough protection for sensitive equipment (e.g., network servers).
Which approach restricts users to the functions necessary for performing their duties?
A. Data encryption
B. Disabling floppy disk drives
C. Application level access control
D. Network monitoring device
Application level access control
Explanation: The use of application-level access control programs is a management control that restricts access by limiting users to only those functions needed to perform their duties. Data encryption and disabling floppy disk drives can restrict users to specific functions, but are not the best choices. A network monitoring device is a detective control, not a preventive one.
What is an estimation technique where the results can be measured by the functional size of an information system based on the number and complexity of input, output, interface, and queries?
A. Time box management
B. Function point analysis
C. Gantt Chart
D. Critical path methodology
Function point analysis
Explanation: Functional Point Analysis (FPA) is an ISO-recognized method used to measure the functional size of an information system, independent of the technology used for implementation. The unit of measurement is function points, representing the amount of functionality recognized by users. FPA can be utilized for various purposes, including budgeting application development costs, estimating annual maintenance costs, determining project productivity, and assessing software size for cost estimation.
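The counting step behind FPA can be sketched as weighted sums over the five function types. A minimal illustration (the counts are hypothetical; the weights shown are the standard IFPUG average-complexity weights):

```python
# IFPUG average-complexity weights per function type:
# external inputs (EI), external outputs (EO), external inquiries (EQ),
# internal logical files (ILF), external interface files (EIF).
WEIGHTS = {"EI": 4, "EO": 5, "EQ": 4, "ILF": 10, "EIF": 7}

def unadjusted_fp(counts: dict) -> int:
    """Sum each function type's count multiplied by its complexity weight."""
    return sum(WEIGHTS[t] * n for t, n in counts.items())

# Hypothetical system: 10 inputs, 6 outputs, 4 inquiries,
# 3 internal files, 2 external interface files.
counts = {"EI": 10, "EO": 6, "EQ": 4, "ILF": 3, "EIF": 2}
print(unadjusted_fp(counts))  # 40 + 30 + 16 + 30 + 14 = 130 function points
```

In practice the unadjusted count is then scaled by a value adjustment factor derived from general system characteristics.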
Who or what is ultimately responsible for ensuring the functionality, reliability, and security within the realm of IT governance? Please select the best answer.
A. Business unit managers
B. Data custodians
C. IT security administration
D. The board of directors and executive officers
The board of directors and executive officers
Explanation: While various teams within an organization contribute to IT governance, the ultimate responsibility for ensuring its functionality, reliability, and security lies with the highest level of leadership, including the board of directors and executive officers. They set the overall strategic direction, approve policies, and oversee the implementation of IT governance practices.
Which procedure is best for determining the existence of adequate recovery/restart procedures?
A. Reviewing program documentation
B. Reviewing operations documentation
C. Turning off the UPS, then the power
D. Reviewing program code
Reviewing operations documentation
Explanation: Operations documentation should contain recovery/restart procedures, so operations can return to normal processing in a timely manner. Turning off the uninterruptible power supply (UPS) and then turning off the power might create a situation for recovery and restart, but the negative effect on operations would prove this method to be undesirable. The review of program code and documentation generally does not provide evidence regarding recovery/restart procedures.
Which control would be the most comprehensive in a remote access network with multiple and diverse subsystems?
A. Firewall installation
B. Password implementation and administration
C. Network administrator
D. Proxy server
Password implementation and administration
Explanation: The most comprehensive control in this situation is password implementation and administration. While firewall installations are the primary line of defense, they cannot protect all access and, therefore, an element of risk remains. A proxy server is a type of firewall installation; thus, the same rules apply. The network administrator may serve as a control, but typically this would not be comprehensive enough to serve on multiple and diverse systems.
Which statement incorrectly describes the traditional audit approach compared to the Control self-assessment approach?
A. The traditional approach is a policy-driven approach
B. The traditional approach requires limited employee participation.
C. In the traditional approach, staff at all levels, in all functions, are the primary control analysts.
D. The traditional approach assigns duties/supervises staff
In the traditional approach, staff at all levels, in all functions, are the primary control analysts.
Explanation: In a traditional audit, the primary responsibility for control assessment largely lies with the external auditor. They review the company's controls and procedures, with limited involvement from internal staff. While some employees may be interviewed or provide information, they are not actively involved in assessing their own controls. Control Self-Assessment actively engages employees at all levels and in all functions to assess their own controls. This approach leverages the insider knowledge and perspective of staff to identify potential weaknesses and areas for improvement.
Traditional approach is a policy driven approach: This is generally true. Traditional audits often focus on compliance with established policies and procedures. They may use checklists and follow a standardized methodology.
Traditional approach requires limited employee participation: As explained above, traditional audits typically involve minimal employee participation in the control assessment process.
Traditional approach assigns duties/supervises staff: This statement is not necessarily incorrect in itself, as any audit function would involve some degree of supervision and task assignment. However, it doesn't highlight the key difference between traditional and CSA approaches regarding employee involvement in control assessment.
What refers to a symmetric key cipher that operates on fixed-length groups of bits with a consistent transformation?
A. string cipher
B. check cipher
C. block cipher
D. stream cipher
block cipher
Explanation: In cryptography, a block cipher is a symmetric key cipher which operates on fixed-length groups of bits, termed blocks, with an unvarying transformation. A stream cipher, on the other hand, operates on individual digits one at a time.
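The "fixed-length blocks, unvarying transformation" idea can be shown with a deliberately insecure toy sketch (the XOR "cipher" here stands in for a real block algorithm such as AES; do not use it for actual protection):

```python
BLOCK_SIZE = 8  # bytes per block

def toy_block_encrypt(plaintext: bytes, key: bytes) -> bytes:
    """Toy block cipher: pad to whole blocks, then apply the SAME keyed
    transformation (here a simple XOR) to every block. NOT secure."""
    assert len(key) == BLOCK_SIZE
    pad = BLOCK_SIZE - len(plaintext) % BLOCK_SIZE
    plaintext += bytes([pad]) * pad           # PKCS#7-style padding
    out = bytearray()
    for i in range(0, len(plaintext), BLOCK_SIZE):
        block = plaintext[i:i + BLOCK_SIZE]
        out.extend(b ^ k for b, k in zip(block, key))
    return bytes(out)

ct = toy_block_encrypt(b"attack at dawn", b"8bytekey")
assert len(ct) % BLOCK_SIZE == 0              # output is whole blocks

# Because the transformation never varies, identical plaintext blocks
# produce identical ciphertext blocks -- the classic ECB-mode weakness,
# which real deployments avoid with chaining modes (CBC, CTR, GCM).
ct2 = toy_block_encrypt(b"AAAAAAAA" * 2, b"8bytekey")
assert ct2[:8] == ct2[8:16]
```

A stream cipher, by contrast, would combine the plaintext with a continuously evolving keystream, one digit at a time.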
Which layer to protocol data unit (PDU) mapping is incorrect within the TCP/IP model?
A. Physical layer - bits
B. Application layer - Data
C. Network layer - Frame
D. Transport layer - Segment
Network layer - Frame
Explanation: The correct protocol data unit for the network layer is the packet; the frame is the PDU of the data link layer.
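The full layer-to-PDU mapping can be written out as a simple lookup table (a study aid, not an API):

```python
# TCP/IP-model layer -> protocol data unit (PDU)
PDU = {
    "application": "data",
    "transport": "segment",       # (a UDP PDU is often called a datagram)
    "network": "packet",
    "data link": "frame",
    "physical": "bits",
}

assert PDU["network"] == "packet"   # the quiz's incorrect pairing was "frame"
assert PDU["data link"] == "frame"  # where frames actually belong
```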
Who is primarily responsible for storing and safeguarding the data?
A. Data User
B. Data Steward / Custodians
C. Data Owner
D. Security Administrator
Data Steward / Custodians
Explanation: In an organization, data roles include Data Owners, Data Custodians or Data Stewards, Security Administrators, and Data Users. Data Owners, typically managers and directors, are responsible for utilizing information to run and control the business. Their security responsibilities encompass authorizing access, updating access rules with personnel changes, and regularly reviewing access rules for the data they manage. Data Custodians or Data Stewards, such as system analysts and computer operators, are tasked with storing and safeguarding data. Security Administrators ensure adequate physical and logical security for IS programs, data, and equipment. Data Users, both internal and external, are the actual consumers of computerized data, with their access authorized by data owners and monitored by security administrators.
Which component of a disaster recovery/continuity plan provides the greatest assurance of recovery after a disaster?
A. User management is involved in the identification of critical systems and their associated critical recovery times.
B. Feedback is provided to management assuring them that the business continuity plans are indeed workable and that the procedures are current.
C. Copies of the plan are kept at the homes of key decision-making personnel.
D. The alternate facility will be available until the original information processing facility is restored.
The alternate facility will be available until the original information processing facility is restored.
Explanation: The alternate facility should be made available until the original site is restored to provide the greatest assurance of recovery after a disaster. Without this assurance, the plan will not be successful.
When evaluating business continuity strategies, why does an IS auditor interview key stakeholders in an organization?
A. adequacy of the business continuity plans.
B. effectiveness of the business continuity plans.
C. clarity and simplicity of the business continuity plans.
D. ability of IS and end-user personnel to respond effectively in emergencies.
clarity and simplicity of the business continuity plans.
Explanation: The IS auditor should interview key stakeholders to evaluate how well they understand their roles and responsibilities. When all stakeholders have a detailed understanding of their roles and responsibilities in the event of a disaster, an IS auditor can deem the business continuity plan to be clear and simple. To evaluate adequacy, the IS auditor should review the plans and compare them to appropriate standards. To evaluate effectiveness, the IS auditor should review the results from previous continuity tests, which provide assurance that target recovery times are met. Emergency procedures and employee training need to be reviewed to determine whether the organization has implemented plans that allow for an effective response.
What does ISO 9126 define as a set of attributes that impact the existence of a set of functions and their specified properties?
A. Maintainability
B. Functionality
C. Reliability
D. Usability
Functionality
Explanation: ISO 9126 defines Functionality as a set of attributes that impact the existence of a set of functions and their specified properties. Functionality refers to the ability of software to provide functions that meet stated or implied needs. It encompasses attributes such as suitability, accuracy, interoperability, security, functionality compliance, and more. These attributes are used to evaluate and assess the extent to which the software fulfills its intended purpose and meets user requirements in terms of its functions and capabilities.
What is the most cost-effective recommendation for reducing the number of defects encountered during software development projects?
A. Require the sign-off of all project deliverables
B. Implement formal software inspections
C. Increase the time allocated for system testing
D. Increase the development staff
Implement formal software inspections
Explanation: Early defect detection: Identifying and fixing defects early in the development process is significantly cheaper than fixing them later on. Inspections, when done properly, can catch a large number of issues before they become embedded in the code. Improved quality overall: Formal inspections promote a culture of quality by involving multiple reviewers and improving communication between different team members.
Require the sign-off of all project deliverables: While sign-offs are important for ensuring that deliverables meet requirements, they do not guarantee the absence of defects. Defects can still slip through even with sign-offs if the review process is not thorough.
Increase the time allocated for system testing: Although more testing can find more defects, it is more cost-effective to prevent defects from being introduced in the first place. Increasing testing time only addresses the problems that exist, not the root causes.
Increase the development staff: While a larger team may have more capacity to work on the project, it doesn't necessarily guarantee better quality. If the underlying processes are flawed, more developers will simply produce more flawed code. The focus should be on improving the development process itself, not just adding more people.
What accurately describes the difference between SSL and S/HTTP?
A. S/HTTP works at the transport layer, whereas SSL works at the application layer of the OSI model
B. Both work at the application layer of the OSI model
C. SSL works at the transport layer, whereas S/HTTP works at the application layer of the OSI model
D. Both work at the transport layer
SSL works at the transport layer, whereas S/HTTP works at the application layer of the OSI model
Explanation: SSL (Secure Sockets Layer) and S/HTTP (Secure Hypertext Transfer Protocol) are both protocols used to provide secure communication over the internet. SSL operates at the transport layer and encrypts data exchanged between a client and a server, while S/HTTP operates at the application layer and secures individual HTTP messages. SSL (or its successor TLS) is more commonly used for securing web traffic, while S/HTTP is less widely adopted.
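The layering is visible in code: TLS (SSL's successor) wraps the raw TCP socket at the transport level, and the application then writes ordinary HTTP through it. A minimal sketch using Python's standard ssl module (the hostname is illustrative; no network connection is actually made):

```python
import socket
import ssl

# TLS secures the transport: it wraps the TCP socket, and the application
# protocol (plain HTTP) rides inside it unchanged.
context = ssl.create_default_context()
raw = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
tls = context.wrap_socket(raw, server_hostname="example.com")

# From the application's point of view, the result is still just a socket;
# the TLS handshake is deferred until connect() is called.
assert isinstance(tls, ssl.SSLSocket)
# tls.sendall(b"GET / HTTP/1.1\r\nHost: example.com\r\n\r\n")  # HTTP inside TLS
tls.close()
```

S/HTTP, in contrast, would secure each individual HTTP message at the application layer rather than the connection underneath it.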
Describe what the directory system of a database-management system encompasses.
A. The location of data
B. Neither the location of data NOR the access method
C. The location of data AND the access method
D. The access method to the data
The location of data AND the access method
Explanation: The directory system of a database-management system defines the data's location and the access method.
What practice should be included in the plan for testing disaster recovery procedures?
A. Install locally-stored backup.
B. Involve all technical staff.
C. Invite client participation.
D. Rotate recovery managers.
Rotate recovery managers.
Explanation: Recovery managers should be rotated to ensure the experience of the recovery plan is spread among the managers. Clients may be involved, but not necessarily in every case. Not all technical staff should be involved in each test. Remote or offsite backup should always be used.
In an enterprise data flow architecture, which layer is responsible for data copying, transformation in Data Warehouse (DW) format, and quality control?
A. Data Mart layer
B. Data Staging and quality layer
C. Desktop Access Layer
D. Data access layer
Data Staging and quality layer
Explanation: In an enterprise data flow architecture, the Data Staging and Quality Layer is responsible for data copying, transformation into Data Warehouse (DW) format, and quality control. This layer plays a crucial role in ensuring that only reliable data is incorporated into the core Data Warehouse. It must also address challenges presented by operational systems, such as changes to account number formats and the reuse of old accounts and customer numbers.
Which object-oriented technology characteristic allows for enhanced data security?
A. Dynamic warehousing
B. Polymorphism
C. Inheritance
D. Encapsulation
Encapsulation
Explanation: Encapsulation is a fundamental concept in object-oriented programming that combines data and methods within a class or object, hiding the internal details and providing controlled access to the object's properties and behaviors. It promotes the principle of data hiding, which means that data should be accessed and modified only through predefined methods or interfaces.
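A brief sketch of the idea (class and attribute names are invented for illustration): the balance is hidden, and every change must pass through a method that enforces the rules.

```python
class Account:
    """Encapsulation: internal state is hidden and reachable only
    through methods that enforce the access rules."""

    def __init__(self, balance: float):
        self.__balance = balance          # name-mangled; not directly visible

    @property
    def balance(self) -> float:           # controlled, read-only access
        return self.__balance

    def withdraw(self, amount: float) -> None:
        if amount <= 0 or amount > self.__balance:
            raise ValueError("invalid withdrawal")
        self.__balance -= amount

acct = Account(100.0)
acct.withdraw(30.0)
print(acct.balance)                       # 70.0
# acct.__balance would raise AttributeError: the internals stay hidden.
```

Because callers can never bypass `withdraw`, invariants such as "the balance never goes negative" are enforced in one place, which is what makes encapsulation a security-relevant property.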
What compensating control would be suitable when there are segregation of duties concerns between IT support staff and end users?
A. Performing background checks prior to hiring IT staff
B. Reviewing transaction and application logs
C. Locking user sessions after a specified period of inactivity
D. Restricting physical access to computing equipment
Reviewing transaction and application logs
Explanation: Only reviewing transaction and application logs directly addresses the threat posed by poor segregation of duties. The review is a means of detecting inappropriate behavior and also discourages abuse, because people who may otherwise be tempted to exploit the situation are aware of the likelihood of being caught. Inadequate segregation of duties is more likely to be exploited via logical access to data and computing resources rather than physical access.
Which control method is effective in safeguarding confidential data stored on a personal computer (PC)?
A. Personal firewall
B. File encryption
C. File encapsulation
D. Host-based intrusion detection
File encryption
Explanation: File encryption is an effective control measure for safeguarding confidential data residing on a personal computer (PC) or other devices.
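The principle can be illustrated with a deliberately simplified sketch. Everything below is a toy: the keystream trick stands in for a real cipher, and actual file encryption should use a vetted algorithm such as AES via a maintained library.

```python
import hashlib

def toy_encrypt(data: bytes, key: bytes) -> bytes:
    """Toy illustration only -- NOT a real cipher. Derives a keystream from
    the key with SHA-256 and XORs it over the data; applying it twice with
    the same key restores the original bytes."""
    stream = bytearray()
    counter = 0
    while len(stream) < len(data):
        stream.extend(hashlib.sha256(key + counter.to_bytes(8, "big")).digest())
        counter += 1
    return bytes(b ^ k for b, k in zip(data, stream))

secret = b"confidential PC data"
key = b"passphrase-derived key"           # hypothetical key material
ciphertext = toy_encrypt(secret, key)

assert ciphertext != secret               # unreadable without the key
assert toy_encrypt(ciphertext, key) == secret   # same key decrypts
```

The point the exam answer makes survives the simplification: even if the PC is stolen, encrypted files are useless without the key.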
Which device in the Frame Relay WAN technique is generally customer-owned and provides connectivity between the company's own network and the frame relay network?
A. DTE
B. DLE
C. DME
D. DCE
DTE
Explanation: DTE (Data Terminal Equipment) refers to the devices on the customer's network, like routers, that connect to the Frame Relay network.
DCE (Data Circuit-Terminating Equipment): This is the device on the service provider's network that manages the access to the Frame Relay network.
DME (Data Media Equipment): This term is not commonly used in the context of Frame Relay. It sometimes refers to devices involved in data transmission over physical media, but not specifically related to Frame Relay.
DLE (Data Link Equipment): This is also not a standard term used in Frame Relay discussions.
What is necessary to ensure the viability of a duplicate information processing facility?
A. The workload of the primary site is monitored to ensure adequate backup is available.
B. The site is near the primary site to ensure quick and efficient recovery.
C. The site contains the most advanced hardware available.
D. The hardware is tested when it is installed to ensure it is working properly.
The workload of the primary site is monitored to ensure adequate backup is available.
Explanation: Resource availability must be assured. The workload of the site must be monitored to ensure that availability for emergency backup use is not impaired. The site chosen should not be subject to the same natural disaster as the primary site. In addition, a reasonable compatibility of hardware/software must exist to serve as a basis for backup. The latest or newest hardware may not adequately serve this need. Testing the hardware when the site is established is essential, but regular testing of the actual backup data is necessary to ensure the operation will continue to perform as planned.
Which step of PDCA (Plan-Do-Check-Act) establishes the necessary objectives and processes to deliver expected results?
A. Plan Stage
B. Check stage
C. Act stage
D. Do stage
Plan Stage
Explanation: The PDCA (Plan-Do-Check-Act) cycle is a management method used for continuous improvement in business processes and products. It involves establishing objectives and processes, implementing plans, studying actual results, comparing them to expected results, requesting corrective actions, and making improvements based on analysis. The cycle is iterative and aims to deliver targeted improvements by refining plans and execution in each iteration. It provides a structured approach to achieve better results and meet goals through continuous evaluation and adjustment.
What is a tool that can be used to simulate a large network structure on a single computer?
A. honeytrap
B. honeytube
C. honeynet
D. honeymoon
honeynet
Explanation: A honeypot is a decoy system designed to mimic a real network, allowing attackers to interact with it and reveal their tactics while being monitored. A "honeynet" is essentially a network of honeypots, creating a larger simulated environment on a single computer.
Which technique does malware use to append a section of itself to files, similar to how file malware appends themselves?
A. Immunizer
B. Scanners
C. Active Monitors
D. Behavior blocker
Immunizer
Explanation: Immunizers defend against malware by appending sections of themselves to files, sometimes in the same way file-infecting malware appends itself. Immunizers continuously check a file for changes and report changes as possible malware behavior. Other types of immunizers are focused on a specific piece of malware and work by giving the malware the impression that it has already infected the computer.
What is the most fundamental step in preventing virus attacks?
A. Inoculating systems with antivirus code
B. Adopting and communicating a comprehensive antivirus policy that outlines why solutions should be deployed and users' responsibilities
C. Implementing antivirus content checking at all network-to-Internet gateways
D. Implementing antivirus protection software on users‘ desktop computers
Adopting and communicating a comprehensive antivirus policy that outlines why solutions should be deployed and users' responsibilities
Explanation: The most fundamental step in preventing virus attacks is adopting and communicating a comprehensive antivirus policy, upon which all other antivirus prevention efforts rely.
What is the greatest cause for concern when data is sent over the Internet using the HTTPS protocol?
A. Symmetric cryptography is used for transmitting data
B. The implementation of an RSA-compliant solution
C. Presence of spyware in one of the ends
D. The use of a traffic sniffing tool
Presence of spyware in one of the ends
Explanation: Encryption using secure sockets layer/transport layer security (SSL/TLS) tunnels makes it difficult to intercept data in transit, but when spyware is running on an end user's computer, data are collected before encryption takes place.
What is the correct sequence of Business Process Reengineering (BPR) benchmarking process?
A. Plan, Research, Observe, Analyze, Adopt and Improve
B. Plan, Observe, Research, Analyze, Adopt and Improve
C. Plan, Research, Analyze, Observe, Adopt and Improve
D. Observe, Plan, Research, Analyze, Adopt and Improve
Plan, Research, Observe, Analyze, Adopt and Improve
Explanation: The correct sequence for Business Process Reengineering (BPR) benchmarking is PLAN, RESEARCH, OBSERVE, ANALYZE, ADOPT, and IMPROVE. BPR involves six fundamental steps according to ISACA: Envision, Initiate, Diagnose, Redesign, Reconstruct, and Evaluate. The Envision phase focuses on visualizing the need for change, estimating Return on Investment (ROI), and developing a preliminary project plan. The Initiate phase sets goals, plans the collection of detailed evidence, and establishes a formal project plan. The Diagnose phase involves documenting existing processes, reviewing process steps, and collecting data to understand the current state. The Redesign phase utilizes evidence from the diagnostic phase to develop a new process through planning iterations, which is then reviewed and approved. The Reconstruct phase involves implementing the new process, either in parallel, modular changes, or complete transition, with deliverables including a conversion plan and formal approval. Finally, the Evaluate phase monitors the reconstructed process, compares actual performance to the original forecast, identifies lessons learned, and implements continuous improvement measures. Benchmarking is a crucial tool in BPR, involving planning, researching, observing, analyzing, adapting, and improving processes based on performance data, customer input, and external benchmarks. This methodology ensures the alignment of processes with strategic goals and ongoing adaptation to new requirements and opportunities.
Which of the following PKI elements offers detailed guidelines for handling a compromised private key?
A. Certification practice statement (CPS)
B. Certificate revocation list (CRL)
C. PKI disclosure statement (PDS)
D. Certificate policy (CP)
Certification practice statement (CPS)
Explanation: The CPS is the how-to part in policy-based PKI. The CRL is a list of certificates that have been revoked before their scheduled expiration date. The CP sets the requirements that are subsequently implemented by the CPS. The PDS covers critical items such as the warranties, limitations and obligations that legally bind each party.
In an enterprise data flow architecture, which layer is concerned with the assembly and preparation of data for loading into data marts?
A. Data Mart layer
B. Data access layer
C. Data preparation layer
D. Desktop Access Layer
Data preparation layer
Explanation: In an enterprise data flow architecture, the Data Preparation Layer is concerned with the assembly and preparation of data for loading into data marts. The usual practice in this layer is to pre-calculate the values that are loaded into OLAP (Online Analytical Processing) data repositories to increase access speed.
In an enterprise data flow architecture, which layer represents subsets of information from the core data warehouse?
A. Data access layer
B. Presentation layer
C. Desktop Access Layer
D. Data Mart layer
Data Mart layer
Explanation: In an enterprise data flow architecture, the Data Mart Layer represents subsets of information from the core data warehouse. Data marts are selected and organized to meet the needs of a particular business unit or business line. Data marts can be relational databases or some form of online analytical processing (OLAP) data structure.
Which dynamic interaction of a Business Model for Information Security (BMIS) is the appropriate place to introduce possible solutions such as feedback loops, alignment with process improvement, and consideration of emergent issues in system design life cycle, change control, and risk management?
A. Emergence
B. Enabling and support
C. Governing
D. Culture
Emergence
Explanation: The emergence dynamic interconnection (between people and processes) is a place to introduce possible solutions such as feedback loops; alignment with process improvement; and consideration of emergent issues in system design life cycle, change control, and risk management. By contrast, the human factors dynamic interconnection represents the interaction and gap between technology and people and, as such, is critical to an information security program.
To ensure the preservation of current and critical information within backup files, what should organizations use off-site storage facilities for?
A. Redundancy
B. Confidentiality
C. Concurrency
D. Integrity
Redundancy
Explanation: Organizations should use off-site storage facilities for backup files to ensure the preservation of current and critical information in the event of disasters, data loss, or other emergencies. Off-site storage serves as a secure and geographically separated location where backup copies of data are stored. This practice is essential for business continuity and disaster recovery planning. In case of on-site disasters, such as fires, floods, or other catastrophic events, having backup copies stored off-site ensures that critical data remains accessible, allowing organizations to recover and resume operations quickly. Off-site storage provides a safeguard against the risk of losing both primary and backup data due to a localized incident, offering a resilient and redundant approach to data protection.
When an application is modified, what should be tested to determine the full impact of the change?
A. Mission-critical functions and any interface systems with other applications or systems
B. All programs, including interface systems with other applications or systems
C. Interface systems with other applications or systems
D. The entire program, including any interface systems with other applications or systems
The entire program, including any interface systems with other applications or systems
Explanation: When an application is modified, it is essential to conduct regression testing to determine the full impact of the change. Regression testing involves retesting the modified application to ensure that the recent changes have not adversely affected the existing functionalities. This type of testing helps identify any unintended side effects or introduced defects resulting from the modifications. By executing a comprehensive set of test cases that cover various aspects of the application, including both the modified and unaffected areas, organizations can verify the system's overall integrity and functionality. Regression testing provides confidence that the application remains stable and reliable after changes and helps prevent the introduction of new issues into the software.
How would you best characterize “worms“?
A. Malicious programs that masquerade as common applications such as screensavers or macro- enabled Word documents
B. Malicious programs that can run independently and can propagate without the aid of a carrier program such as email.
C. Malicious programs that require the aid of a carrier program such as email
D. Programming code errors that cause a program to repeatedly dump data
Malicious programs that can run independently and can propagate without the aid of a carrier program such as email.
Explanation: A worm is a type of malware that can replicate itself and spread across a network without a user actively interacting with it, unlike a virus, which typically requires a user to open a malicious file.
In an enterprise data flow architecture, which layer captures all data of interest to an organization and organizes it for reporting and analysis?
A. Desktop access layer
B. Data preparation layer
C. Data access layer
D. Core data warehouse
Core data warehouse
Explanation: In an enterprise data flow architecture, the layer that captures all data of interest to an organization and organizes it for reporting and analysis is the core data warehouse layer. This layer is where all the data of interest is collected and structured to assist in reporting and analysis. The core data warehouse is typically implemented as a large relational database and is designed to support various forms of inquiry, including drill-up, drill-down, drill-across, and historical analysis.
What can be considered the simplest and most cost-effective type of firewall?
A. hardware firewall
B. packet filter
C. stateful firewall
D. PIX firewall
packet filter
Explanation: The simplest and generally least expensive type of firewall is a packet filter, which stops messages with inappropriate network addresses. It usually consists of a screening router and a set of rules that accept or reject a message based on information in the message header.
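As a sketch of how header-only filtering works, the rule set below (illustrative addresses and fields, not taken from any real product) accepts or rejects a packet using nothing but its header:

```python
import ipaddress

# Minimal sketch of packet-filter logic: each rule matches on header
# fields only (no connection state), and the first matching rule wins.
# The rules and addresses are illustrative assumptions.
RULES = [
    {"action": "deny",  "src": "10.0.0.0/8",  "dst_port": None},  # drop spoofed internal range
    {"action": "allow", "src": None,          "dst_port": 443},   # allow HTTPS to any host
    {"action": "deny",  "src": None,          "dst_port": None},  # default deny
]

def filter_packet(src_ip, dst_port):
    """Return 'allow' or 'deny' based solely on header information."""
    for rule in RULES:
        if rule["src"] is not None and \
                ipaddress.ip_address(src_ip) not in ipaddress.ip_network(rule["src"]):
            continue
        if rule["dst_port"] is not None and rule["dst_port"] != dst_port:
            continue
        return rule["action"]
    return "deny"
```

Because each decision looks only at the current packet's header, this kind of filter is cheap but cannot track connection state the way a stateful firewall does.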
Who is responsible for providing adequate physical and logical security for IS programs, data, and equipment?
A. Data Owner
B. Data Custodian
C. Security Administrator
D. Data User
Security Administrator
Explanation: In an organizational context, various roles contribute to ensuring effective information security. Data Owners, typically managerial personnel, hold responsibility for utilizing information to manage and control business operations. Their security duties involve authorizing access, updating access rules with personnel changes, and regularly reviewing access rules. Data Custodians or Stewards, including IS personnel like system analysts and computer operators, focus on securely storing and safeguarding data. Security Administrators play a crucial role in providing both physical and logical security for information systems, programs, data, and equipment. Finally, Data Users, encompassing internal and external user communities, are the actual consumers of computerized data, with their access authorized by Data Owners and monitored by Security Administrators.
While implementing an invoice system, Lily has introduced a database control that verifies if new transactions have already been entered. Which control has Lily implemented?
A. Reasonableness check
B. Range Check
C. Existence check
D. Duplicate Check
Duplicate Check
Explanation: Various checks play a pivotal role in ensuring the accuracy and integrity of data within information systems.
Sequence Checks are crucial for maintaining order and identifying irregularities in control numbers.
Limit Checks are essential to prevent data from exceeding predetermined values, ensuring, for example, that payroll checks do not surpass a specified maximum amount.
Validity Checks are fundamental in examining data against predefined criteria, ensuring it adheres to specific standards such as acceptable codes for marital status.
Range Checks verify that data falls within defined ranges, preventing issues like invalid product type codes.
Reasonableness Checks contribute to data quality by matching inputs to reasonable limits, useful for detecting anomalies like unrealistic quantities in orders.
Table Lookups ensure data compliance with predefined criteria in computerized tables, contributing to consistency.
Existence Checks verify the correctness of entered data against predefined criteria, such as valid transaction codes.
Key Verification enhances data accuracy by repeating the keying process and comparing it to the original, minimizing input errors.
Check Digits provide a mathematical means to detect errors and alterations in data, enhancing its reliability.
Completeness Checks ensure that fields contain meaningful data rather than blanks or zeros, contributing to the overall quality of records.
Duplicate Checks prevent redundancy by ensuring new transactions do not duplicate previously input data.
Logical Relationship Checks ensure that data relationships align with predefined logical conditions, enhancing the overall integrity of the information system. Each of these checks serves a distinct purpose in maintaining data accuracy and reliability within information systems.
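The duplicate check Lily implemented can be sketched in a few lines; the in-memory set below stands in for a database lookup (an assumption made purely for illustration):

```python
# Minimal sketch of a duplicate check: reject any transaction whose
# identifier has already been entered. A set of seen IDs stands in for
# the database table a real invoice system would query.
seen_invoice_ids = set()

def enter_transaction(invoice_id):
    """Return True if the transaction is accepted, False if it duplicates a prior entry."""
    if invoice_id in seen_invoice_ids:
        return False                     # duplicate check fails: already input
    seen_invoice_ids.add(invoice_id)
    return True
```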
What is the greatest risk when proper management of storage growth is neglected in a critical file server?
A. Backup time would steadily increase
B. Backup operational cost would significantly increase
C. Storage operational cost would significantly increase
D. Server recovery work may not meet the recovery time objective (RTO)
Server recovery work may not meet the recovery time objective (RTO)
Explanation: In case of a crash, recovering a server with an extensive amount of data could require a significant amount of time. If the recovery cannot meet the recovery time objective (RTO), there will be a discrepancy in IT strategies, so it is important to ensure that server restoration can meet the RTO. An incremental backup captures only the daily differential, so a steady increase in backup time is not always true. The backup and storage cost issues are not as significant as failing to meet the RTO.
Which layer of the OSI model ensures error-free, sequenced, and lossless message delivery?
A. Session layer
B. Presentation layer
C. Application layer
D. Transport layer
Transport layer
Explanation: The OSI model is a conceptual model that standardizes the internal functions of a communication system by dividing it into abstraction layers. It consists of seven layers: the physical layer, data link layer, network layer, transport layer, session layer, presentation layer, and application layer.
The physical layer handles the transmission and reception of raw bit streams, while the data link layer provides error-free transfer of data frames.
The network layer controls the path that data takes based on network conditions, and the transport layer ensures reliable message delivery.
The session layer allows for session establishment, the presentation layer formats data, and the application layer provides the interface for users to access network resources. Each layer serves the layer above it and is served by the layer below it.
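The layering described above can be sketched as successive encapsulation, with each layer wrapping what it receives from the layer above; the header fields below are simplified placeholders, not real protocol formats:

```python
# Illustrative sketch of encapsulation down the OSI stack. Each layer
# adds its own header around the payload handed down from above; the
# field names and values are simplified assumptions for illustration.
def encapsulate(message):
    segment = {"layer": "transport", "seq": 1, "payload": message}
    packet  = {"layer": "network",   "dst_ip": "192.0.2.10", "payload": segment}
    frame   = {"layer": "data link", "dst_mac": "aa:bb:cc:dd:ee:ff", "payload": packet}
    return frame   # the physical layer then transmits the frame as raw bits

frame = encapsulate("GET /index.html")
```

Unwrapping happens in reverse on the receiving side, which is why each layer is said to serve the layer above it.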
What is the most effective control for managing the usage of universal storage bus (USB) devices?
A. Software for tracking and managing USB storage devices
B. Administratively disabling the USB port
C. Policies that require instant dismissal if such devices are found
D. Searching personnel for USB storage devices at the facility's entrance
Software for tracking and managing USB storage devices
Explanation: Software for centralized tracking and monitoring would allow a USB usage policy to be applied to each user based on changing business requirements, and would provide for monitoring and reporting exceptions to management. A policy requiring dismissal may result in increased employee attrition and business requirements would not be properly addressed. Disabling ports would be complex to manage and might not allow for new business needs. Searching of personnel for USB storage devices at the entrance to a facility is not a practical solution since these devices are small and could be easily hidden.
Which protocol, jointly developed by VISA and Mastercard, secures payment transactions in credit card transactions?
A. SSH
B. Secure Electronic Transaction (SET)
C. S/HTTP
D. S/MIME
Secure Electronic Transaction (SET)
Explanation: Secure Electronic Transaction (SET) was a protocol developed by VISA and Mastercard in the late 1990s to provide secure payment transactions over the internet. It aimed to address security concerns associated with online credit card transactions at the time. SET utilized a combination of encryption, digital certificates, and digital signatures to ensure the confidentiality, integrity, and authenticity of payment information during online transactions. It involved multiple parties, including the cardholder, merchant, and payment gateway, to facilitate secure communication and transaction processing.
Which of the following statements about public key infrastructure (PKI) is false?
A. The Certificate authority role is to issue digital certificates to end users
B. The Registration authority (RA) acts as a verifier for Certificate Authority (CA)
C. The Registration authority role is to issue digital certificates to end users
D. Root certificate authority's certificate is always self-signed
The Registration authority role is to issue digital certificates to end users
Explanation: The Registration Authority (RA) does not have the role of issuing digital certificates to end users. The primary responsibility of the RA is to verify the identity and authorization of the certificate requester before forwarding the request to the Certificate Authority (CA). The CA is the entity responsible for issuing digital certificates to end users. The RA acts as an intermediary between the requester and the CA, validating the information provided by the requester and ensuring that the CA can trust the identity of the requester before issuing the certificate.
Note that a self-signed certificate means the certificate is signed by the entity it belongs to, in this case the Root CA itself. Since the Root CA is the highest level of trust in a public key infrastructure (PKI) hierarchy, its certificate does not need to be signed by any higher authority; it is self-signed to establish its own trustworthiness as the root of the PKI. Other certificates within the PKI hierarchy are typically signed by intermediate or subordinate CAs, which ultimately trace back to the self-signed certificate of the Root CA.
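The chain-of-trust idea can be sketched as a toy lookup in which each certificate names its issuer and the root is the one certificate that is its own issuer (all names here are hypothetical):

```python
# Toy model of a certificate chain: map each certificate subject to its
# issuer. The root CA is self-signed, i.e. issuer == subject, so walking
# up the chain terminates there. All names are hypothetical examples.
chain = {
    "www.example.com": "Intermediate CA",
    "Intermediate CA": "Root CA",
    "Root CA": "Root CA",          # self-signed: its own issuer
}

def find_root(subject):
    """Follow issuer links upward until a self-signed certificate is reached."""
    while chain[subject] != subject:
        subject = chain[subject]
    return subject
```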
What is the recommended initial step for an IS auditor to implement continuous monitoring systems?
A. Identify high-risk areas within the organization
B. Establish a controls-monitoring steering committee
C. Document existing internal controls
D. Perform compliance testing on internal controls
Identify high-risk areas within the organization
Explanation: Before implementing continuous monitoring, the auditor needs to understand where the most critical risks are within the organization to focus their efforts.
Which approach reduces the ability of one device to capture packets intended for another device?
A. Routers
B. Firewalls
C. Filters
D. Switches
Switches
Explanation: A switch learns the MAC addresses of devices connected to its ports and directs data packets only to the intended device's MAC address, effectively preventing other devices on the network from eavesdropping on the communication.
Routers: Routers direct packets between different networks, not within a local network like a switch. They can also be used for more complex routing decisions based on IP addresses, but they don't restrict packet capture within a single network.
Firewalls: Firewalls are primarily concerned with security at the network layer, filtering traffic based on rules and policies to prevent unauthorized access. They can help prevent eavesdropping between different networks or from the internet, but they don't necessarily restrict communication within a local network.
Filters: While the term "filter" can have various meanings, in the context of networking, it usually refers to a device that removes specific data from a stream, not a device that prevents packet capture. For example, a network packet filter might remove certain sensitive information from a packet before it is transmitted.
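The MAC-learning behavior that gives switches this property can be sketched as follows (a simplified model that ignores VLANs, table aging, and broadcast frames):

```python
# Minimal sketch of switch behavior: learn the source MAC address seen
# on each incoming frame, then forward a frame only to the port where
# its destination MAC was learned, flooding while it is still unknown.
class Switch:
    def __init__(self, num_ports):
        self.num_ports = num_ports
        self.mac_table = {}                      # MAC address -> port

    def receive(self, in_port, src_mac, dst_mac):
        self.mac_table[src_mac] = in_port        # learning step
        if dst_mac in self.mac_table:
            return [self.mac_table[dst_mac]]     # unicast to a single port
        # unknown destination: flood to every port except the ingress port
        return [p for p in range(self.num_ports) if p != in_port]
```

Once both endpoints have been learned, traffic between them appears only on their two ports, which is why a third device on another port cannot capture it.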
What benefit does using capacity-monitoring software to monitor usage patterns and trends provide to management?
A. The software produces nice reports that really impress management.
B. It allows users to properly allocate resources and ensure continuous efficiency of operations.
C. Support management in allocating resources and ensure continuous efficiency of operations.
D. The software can dynamically readjust network traffic capabilities based upon current usage.
Support management in allocating resources and ensure continuous efficiency of operations.
Explanation: Capacity-monitoring software provides insights into resource usage patterns and trends. This information enables management to make informed decisions about resource allocation, ensuring that resources are used efficiently and effectively. It also helps identify potential bottlenecks or areas where capacity may be insufficient.
The software produces nice reports that really impress management: While reports generated by capacity-monitoring software can be valuable for presenting data and insights to management, the primary benefit is not simply the aesthetics of the reports.
It allows users to properly allocate resources and ensure continuous efficiency of operations: This option is nearly identical to the correct answer, but capacity data supports allocation decisions made by management, not by individual users, which makes it the weaker phrasing.
The software can dynamically readjust network traffic capabilities based upon current usage: While some capacity-monitoring software might have this capability, the core benefit is in providing capacity insights for resource management, not necessarily dynamic network traffic adjustments.
In an enterprise data flow architecture, which layer derives enterprise information from operational data, external data, and nonoperational data?
A. Data access layer
B. Data mart layer
C. Data source layer
D. Data preparation layer
Data source layer
Explanation: In an enterprise data flow architecture, the layer that derives enterprise information from operational data, external data, and nonoperational data is the data source layer. This layer is responsible for collecting raw data from internal operational systems, external feeds, and nonoperational sources before the data is passed on for preparation and loading into the warehouse.
Which of the following is typically a responsibility of the chief security officer (CSO)?
A. Periodically reviewing and evaluating the security policy
B. Executing user application and software testing and evaluation
C. Approving access to data and applications
D. Granting and revoking user access to IT resources
Periodically reviewing and evaluating the security policy
Explanation: The role of a chief security officer (CSO) is to ensure that the corporate security policy and controls are adequate to prevent unauthorized access to the company assets, including data, programs and equipment. User application and other software testing and evaluation normally are the responsibility of the staff assigned to development and maintenance. Granting and revoking access to IT resources is usually a function of network or database administrators. Approval of access to data and applications is the duty of the data owner.
Which of the following functionality is not performed by the application layer of a TCP/IP model?
A. End-to-end connection
B. Data encryption and compression
C. Print service, application services
D. Dialog management
End-to-end connection
Explanation: End-to-end connection is a transport layer function, not an application layer function. In the TCP/IP model, the application layer corresponds to the OSI application, presentation, and session layers, so it covers services such as print and application services, data encryption and compression, and dialog management. The transport layer is the layer responsible for end-to-end communication: it establishes, maintains, and terminates connections, manages flow control, and handles error detection and correction, ensuring that data is reliably delivered between two devices over a network. Two commonly used transport layer protocols are Transmission Control Protocol (TCP) and User Datagram Protocol (UDP).
Who is responsible for reviewing the outcomes and deliverables at each phase and ensuring compliance with requirements?
A. Senior Management
B. Project Sponsor
C. Quality Assurance
D. User Management
Quality Assurance
Explanation: In the software development life cycle (SDLC), Quality Assurance personnel play a crucial role in ensuring the project's quality by reviewing results and deliverables at each phase. Their objective is to measure the adherence of the project staff to the organization's SDLC, confirm compliance with requirements, and provide recommendations for process improvement or additional control points when deviations occur.
Senior Management demonstrates commitment to the project and allocates necessary resources, while User Management assumes ownership of the project, actively participating in various phases, and addressing questions related to software functions, reliability, effectiveness, user-friendliness, data transfer, adaptability, regulatory compliance, and more.
The Project Steering Committee provides overall direction and ensures stakeholder representation, and the Project Manager manages day-to-day activities, adherence to standards, stakeholder expectations, and project costs.
The Project Sponsor provides funding, defines critical success factors, and holds data and application ownership.
The Security Officer ensures effective system controls and consults on security measures throughout the life cycle.
User and System Development Project Teams complete assigned tasks, communicate effectively, and advise the project manager on deviations. This collaborative involvement of different roles ensures a comprehensive and successful software development process.
Which changeover approach is suggested when shifting users from an older system to a newer system on a cutoff date and time?
A. Abrupt changeover
B. Phased changeover
C. Pilot changeover
D. Parallel changeover
Abrupt changeover
Explanation: In the abrupt changeover approach, the newer system replaces the older system at a specified cutoff date and time, and the older system is discontinued once the changeover is complete. Changeover involves shifting users from the existing (old) system to the replacing (new) system and includes four major steps: converting files and programs and test running on a test bed; installing the new hardware, operating system, application system, and migrated data; training employees or users in groups; and scheduling operations and test running for go-live. The risks associated with changeover include asset safeguarding, data integrity, system effectiveness, change management challenges, and the possibility of duplicate or missing records. The other options describe different changeover approaches: parallel changeover (running both the old and new systems simultaneously before the full changeover), phased changeover (breaking the older system into deliverable modules and gradually phasing them out in favor of the new system), and pilot changeover (introducing the new system to a small group of users or a single site before the full rollout).
What is the main high-level goal for an auditor reviewing a system development project?
A. To ensure that proper approval for the project has been obtained
B. To ensure that programming and processing environments are segregated
C. To ensure that business objectives are achieved
D. To ensure that projects are monitored and administrated effectively
To ensure that business objectives are achieved
Explanation: One of the main high-level goals for an auditor reviewing a systems development project is to ensure the achievement of business objectives. This objective guides all other systems development objectives.
Which layer of the OSI model is responsible for transmitting and receiving the bit stream over a medium or carrier?
A. Transport Layer
B. Network Layer
C. Physical Layer
D. Data Link Layer
Physical Layer
Explanation: The OSI model is a conceptual model that standardizes the internal functions of a communication system by dividing it into abstraction layers. It consists of seven layers: the physical layer, data link layer, network layer, transport layer, session layer, presentation layer, and application layer. The physical layer handles the transmission and reception of raw bit streams, while the data link layer provides error-free transfer of data frames. The network layer controls the path that data takes based on network conditions, and the transport layer ensures reliable message delivery. The session layer allows for session establishment, the presentation layer formats data, and the application layer provides the interface for users to access network resources. Each layer serves the layer above it and is served by the layer below it.
What type of IDS possesses self-learning capabilities and gradually learns the expected behavior of a system?
A. Signature Based IDS
B. Statistical based IDS
C. Host Based IDS
D. Neural Network based IDS
Neural Network based IDS
Explanation: A neural network-based IDS monitors the general patterns of activity and traffic on the network and creates a database; it is similar to the statistical model but has the added function of self-learning. Signature-based systems store identified intrusive patterns in the form of signatures and protect against detected intrusion patterns. Statistical-based systems need a comprehensive definition of the known and expected behavior of systems. Host-based systems are a category of IDS defined by where they are deployed rather than by detection method; they are configured for a specific environment and monitor various internal resources of the operating system to warn of a possible attack.
Which of the following measures helps prevent an organization's systems from participating in a distributed denial-of-service (DDoS) attack?
A. Inbound traffic filtering
B. Recentralizing distributed systems
C. Using access control lists (ACLs) to restrict inbound connection attempts
D. Outbound traffic filtering
Outbound traffic filtering
Explanation: A DDoS attack involves using many compromised systems to flood a target server with requests, effectively overwhelming it and preventing legitimate users from accessing the service. By filtering outbound traffic, an organization can ensure that its own systems cannot be used to send malicious traffic to other targets, thus preventing them from becoming part of a DDoS attack.
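One common form of outbound filtering, egress anti-spoofing, can be sketched like this (the address range is an example value, not from the source):

```python
import ipaddress

# Sketch of egress filtering: only traffic whose source address belongs
# to the organization's own address range is allowed to leave, so a
# compromised internal host cannot emit spoofed packets toward a DDoS
# victim. The network below is an illustrative example.
OUR_NETWORK = ipaddress.ip_network("198.51.100.0/24")

def allow_outbound(src_ip):
    """Permit an outbound packet only if its source address is one of ours."""
    return ipaddress.ip_address(src_ip) in OUR_NETWORK
```

Real deployments would also rate-limit or block known attack traffic patterns on the way out, but the source-address rule alone already stops spoofed participation.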
An IS auditor reviewing a new application for compliance with information privacy principles should be the MOST concerned with:
A. nonrepudiation
B. collection limitation
C. availability
D. awareness
collection limitation
Explanation: Collection limitation is exactly what it says: limiting the amount of personally identifiable information (PII) collected. Information should be collected only if and when it is needed, and it should be retained only as long as necessary.
What is a characteristic of timebox management?
A. Separates system and user acceptance testing
B. Not suitable for prototyping or rapid application development (RAD)
C. Eliminates the need for a quality process
D. Prevents cost overruns and delivery delays
Prevents cost overruns and delivery delays
Explanation: Timebox management, by its nature, sets specific time and cost boundaries. It is very suitable for prototyping and RAD, and integrates system and user acceptance testing, but does not eliminate the need for a quality process.
Who assumes ownership of a systems development project and the resulting system?
A. IT management
B. User management
C. Project steering committee
D. Systems developers
User management
Explanation: User management is responsible for ultimately owning the systems development project and the resulting system because they are the primary users of the application and are responsible for its successful implementation and ongoing use within the organization. They define the business requirements, participate in testing, and provide feedback on the system's functionality.
What is the first step in managing the risk of a cyber attack?
A. estimate potential damage.
B. assess the vulnerability impact.
C. evaluate the likelihood of threats.
D. identify critical information assets.
identify critical information assets.
Explanation: The first step in managing risk is the identification and classification of critical information resources (assets). Once the assets have been identified, the process moves on to the identification of threats and vulnerabilities and the calculation of potential damage.
What is a common vulnerability that allows for denial-of-service attacks?
A. Improperly configured routers and router access lists
B. Configuring firewall access rules
C. Lack of employee awareness of organizational security policies
D. Assigning access to users according to the principle of least privilege
Improperly configured routers and router access lists
Explanation: An improperly configured router can be susceptible to attacks that flood it with excessive traffic, overwhelming its resources and preventing legitimate users from accessing the network, which is a classic denial-of-service scenario.
What is the primary measure for securing software and data within an information processing facility?
A. Security committee
B. Reading the security policy
C. Logical access controls
D. Security awareness
Logical access controls
Explanation: To retain a competitive advantage and meet basic business requirements, organizations must ensure the integrity of the information stored on their computer systems, preserve the confidentiality of sensitive data, and ensure the continued availability of their information systems. Logical access controls are the primary means of meeting these goals.
Which layer of the OSI model encapsulates packets into frames?
A. Physical Layer
B. Data Link Layer
C. Transport Layer
D. Network Layer
Data Link Layer
Explanation: The OSI model is a conceptual model that standardizes the internal functions of a communication system by dividing it into abstraction layers. It consists of seven layers: the physical layer, data link layer, network layer, transport layer, session layer, presentation layer, and application layer. The physical layer handles the transmission and reception of raw bit streams, while the data link layer provides error-free transfer of data frames. The network layer controls the path that data takes based on network conditions, and the transport layer ensures reliable message delivery. The session layer allows for session establishment, the presentation layer formats data, and the application layer provides the interface for users to access network resources. Each layer serves the layer above it and is served by the layer below it.
What data validation control did John implement on the marital status field of a payroll record?
A. Range Check
B. Reasonableness check
C. Existence check
D. Validity Check
Validity Check
Explanation: Various checks play a pivotal role in ensuring the accuracy and integrity of data within information systems. A Sequence Check is crucial for maintaining order and identifying irregularities in control numbers.
Limit Checks are essential to prevent data from exceeding predetermined values, ensuring, for example, that payroll checks do not surpass a specified maximum amount.
Validity Checks are fundamental in examining data against predefined criteria, ensuring it adheres to specific standards such as acceptable codes for marital status.
Range Checks verify that data falls within defined ranges, preventing issues like invalid product type codes.
Reasonableness Checks contribute to data quality by matching inputs to reasonable limits, useful for detecting anomalies like unrealistic quantities in orders.
Table Lookups ensure data compliance with predefined criteria in computerized tables, contributing to consistency.
Existence Checks verify the correctness of entered data against predefined criteria, such as valid transaction codes.
Key Verification enhances data accuracy by repeating the keying process and comparing it to the original, minimizing input errors.
Check Digits provide a mathematical means to detect errors and alterations in data, enhancing its reliability.
Completeness Checks ensure that fields contain meaningful data rather than blanks or zeros, contributing to the overall quality of records.
Duplicate Checks prevent redundancy by ensuring new transactions do not duplicate previously input data.
Logical Relationship Checks ensure that data relationships align with predefined logical conditions, enhancing the overall integrity of the information system. Each of these checks serves a distinct purpose in maintaining data accuracy and reliability within information systems.
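The check digit entry above is the least intuitive of these controls; a common concrete instance is the Luhn algorithm used on payment card numbers (a standard algorithm, offered here as an illustration rather than anything specified by the source):

```python
# Luhn check digit validation, as used on payment card numbers: double
# every second digit from the right (subtracting 9 when the result
# exceeds 9) and require the total to be a multiple of 10. This detects
# any single-digit error and most adjacent transpositions.
def luhn_valid(number: str) -> bool:
    total = 0
    for i, ch in enumerate(reversed(number)):
        d = int(ch)
        if i % 2 == 1:          # every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0
```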
Which layer of the OSI model controls the dialog between computers?
A. Session layer
B. Presentation layer
C. Application layer
D. Transport layer
Session layer
Explanation: The OSI model is a conceptual model that standardizes the internal functions of a communication system by dividing it into abstraction layers. It consists of seven layers: the physical layer, data link layer, network layer, transport layer, session layer, presentation layer, and application layer. The physical layer handles the transmission and reception of raw bit streams, while the data link layer provides error-free transfer of data frames. The network layer controls the path that data takes based on network conditions, and the transport layer ensures reliable message delivery. The session layer allows for session establishment, the presentation layer formats data, and the application layer provides the interface for users to access network resources. Each layer serves the layer above it and is served by the layer below it.
Which files have appendages that serve as protection against viruses?
A. Behavior blockers
B. Active monitors
C. Cyclical redundancy checkers (CRCs)
D. Immunizers
Immunizers
Explanation: Immunizers defend against viruses by appending sections of themselves to files. They continuously check the file for changes and report changes as possible viral behavior. Behavior blockers focus on detecting potentially abnormal behavior, such as writing to the boot sector or the master boot record, or making changes to executable files. Cyclical redundancy checkers compute a binary number on a known virus-free program that is then stored in a database file. When that program is subsequently called to be executed, the checkers look for changes to the files, compare it to the database and report possible infection if changes have occurred. Active monitors interpret DOS and ROM basic input-output system (BIOS) calls, looking for virus-like actions.
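The CRC-based change detection described above can be sketched as follows; `zlib.crc32` stands in for whatever checksum a real antivirus product computes:

```python
import zlib

# Sketch of how a cyclical redundancy checker detects possible infection:
# record a checksum for each known-clean file, then later flag any file
# whose checksum no longer matches the stored baseline. zlib.crc32 is a
# stand-in for the product-specific checksum algorithm.
baseline = {}                              # filename -> stored CRC

def record_clean(name, content: bytes):
    baseline[name] = zlib.crc32(content)

def possibly_infected(name, content: bytes) -> bool:
    return zlib.crc32(content) != baseline[name]
```

Note that a CRC only detects that a file changed; deciding whether the change is malicious requires further analysis.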
In an enterprise data flow architecture, which layer schedules tasks necessary to build and maintain the Data Warehouse (DW) and populate Data Marts?
A. Warehouse management layer
B. Desktop Access Layer
C. Data preparation layer
D. Data access layer
Warehouse management layer
Explanation: In an enterprise data flow architecture, the Warehouse Management Layer is responsible for scheduling the tasks necessary to build and maintain the Data Warehouse (DW) and populate Data Marts. Additionally, this layer is involved in the administration of security related to data management tasks.
Which internet security threat can compromise integrity?
A. Theft of data from the client
B. A Trojan Horse acting as a browser
C. Eavesdropping on the net
D. Exposure of network configuration information
A Trojan Horse acting as a browser
Explanation: A Trojan Horse is a type of malware that disguises itself as a legitimate program. When a user installs or runs a Trojan Horse, it can gain access to a system or network and perform malicious actions. This includes altering or modifying data, which directly compromises the integrity of information. For example, a Trojan Horse could tamper with financial records, personal information, or important files. It could also modify the code of a browser to redirect users to malicious websites or inject harmful scripts.
Theft of data from the client: While theft of data can certainly have a negative impact on security, it primarily compromises confidentiality, not integrity. Data theft involves stealing information without altering it. Integrity refers to the accuracy and correctness of data.
Eavesdropping on the net: Eavesdropping involves listening in on network communications to capture sensitive data. This threat primarily compromises confidentiality, as the eavesdropper is listening to, not modifying, the data.
Exposure of network configuration information: Exposing network configuration information can create vulnerabilities and make a network more susceptible to attack. However, this primarily compromises confidentiality, since attackers can exploit the exposed information to gain unauthorized access. Integrity is affected by the modification of data, not by its exposure.
Which type of network service is used by network computers to obtain IP addresses and other parameters such as the default gateway and subnet mask?
A. DNS
B. Directory Service
C. Network Management
D. DHCP
DHCP
Explanation: The type of network service used by network computers to obtain IP addresses and other parameters such as the default gateway and subnet mask is DHCP (Dynamic Host Configuration Protocol). DHCP automates the process of IP address assignment and simplifies network configuration for devices on a network.
What is one of the most common methods used for distributing spyware?
A. as a device driver.
B. as an Adware.
C. as a virus.
D. as a trojan horse.
as a trojan horse.
Explanation: One of the most common ways that spyware is distributed is as a Trojan horse, bundled with a piece of desirable software that the user downloads off the Web or a peer-to-peer file-trading network. When the user installs the software, the spyware is installed alongside.
Which approach uses a prototype that can be continuously updated to meet changing user or business requirements?
A. PERT
B. GANTT
C. Function point analysis (FPA)
D. Rapid application development (RAD)
Rapid application development (RAD)
Explanation: RAD emphasizes iterative development, where a working prototype is repeatedly refined based on user feedback, allowing for flexibility and adaptation to evolving needs.
PERT (Program Evaluation and Review Technique): A project management method focused on defining tasks, estimating their duration, and identifying critical paths, but not primarily on continuous prototyping and user feedback.
GANTT: A visual tool used to track project progress and timelines, not designed for dynamic prototype refinement based on user input.
Function point analysis (FPA): A method for measuring software complexity based on its functionalities, not directly related to iterative prototyping and user feedback.
What is a concern when data is transmitted through Secure Sockets Layer (SSL) encryption on a trading partner's server?
A. Data might not reach the intended recipient.
B. Messages are subjected to wiretapping.
C. The communication may not be secure.
D. The organization does not have control over encryption.
The organization does not have control over encryption.
Explanation: While SSL/TLS provides robust encryption, the encryption on the trading partner's server is performed with keys the organization does not control. The organization therefore cannot verify how those keys are generated, stored, and protected, and must rely on the partner's key management practices; weak practices on the partner's side could expose data even though it was transmitted securely.
What accurately describes one-way SSL authentication between a client (e.g., browser) and a server (e.g., web server)?
A. Client and server are authenticated
B. Only the server is authenticated while client remains unauthenticated
C. Only the client is authenticated while server remains authenticated
D. Client and server are unauthenticated
Only the server is authenticated while client remains unauthenticated
Explanation: One-way SSL authentication, often referred to as server-only authentication, involves the verification of the server's identity by the client during the SSL/TLS handshake. In this scenario, such as when a browser connects to a web server, the server presents its digital certificate, containing its public key, to the client. The client, typically a browser, validates the certificate's authenticity by checking its digital signature and ensuring it is issued by a trusted Certificate Authority (CA). Once the server is authenticated, a secure communication channel is established, encrypting data exchanged between the client and server. Notably, the client remains anonymous in this process, as it is not required to present a certificate for verification. One-way SSL is commonly employed in secure web connections (HTTPS), where ensuring the legitimacy of the server is crucial for establishing a secure and encrypted session.
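One-way authentication is visible in how a standard TLS client context is configured. The sketch below uses Python's `ssl` module: a default client context verifies the server's certificate and hostname, but presents no client certificate unless one is explicitly loaded (the `load_cert_chain` paths shown in the comment are hypothetical).

```python
import ssl

# A default client-side context performs one-way (server-only) authentication:
# it validates the server's certificate chain against trusted CAs and checks
# the hostname, but sends no client certificate of its own.
context = ssl.create_default_context()

print(context.verify_mode == ssl.CERT_REQUIRED)  # True: server cert must verify
print(context.check_hostname)                    # True: hostname checked against cert

# Upgrading to two-way (mutual) TLS would require the client to load its own
# certificate, e.g.:
#   context.load_cert_chain("client.pem", "client.key")  # hypothetical paths
```

Wrapping a socket with this context and connecting to an HTTPS server would then complete the server-only handshake described above.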
To derive accurate conclusions about the effects of changes or corrections to a program and ensure the absence of new errors, what should regression testing employ?
A. Data from previous tests
B. Independently created data
C. Contrived data
D. Live data
Data from previous tests
Explanation: Regression testing should utilize data from previous tests to draw accurate conclusions about the effects of changes or corrections to a program and ensure that no new errors have been introduced.
Which mapping of layer to protocol is incorrect in the DOD TCP/IP model?
A. Network Access layer protocol example is Ethernet
B. Application layer protocol example is Telnet
C. Transport layer protocol example is ICMP
D. Internet layer protocol example is IP
Transport layer protocol example is ICMP
Explanation: The incorrect layer-to-protocol mapping in the given information is that ICMP (Internet Control Message Protocol) is listed under the Transport Layer. However, ICMP actually works at the Internet Layer of the DoD TCP/IP model, not the Transport Layer. ICMP is responsible for providing error reporting, diagnostic, and control functions for IP (Internet Protocol).
When you decide to ensure that off-site data backup and storage are geographically separated, this is an example of which risk response technique?
A. Transfer
B. Mitigate
C. Eliminate
D. Accept
Mitigate
Explanation: To mitigate the risk of a widespread physical disaster like a hurricane or an earthquake, it is important to geographically separate off-site data backup and storage.
What is NOT true about Voice-Over IP (VoIP)?
A. Lower cost per call or even free calls, especially for long distance call
B. Lower infrastructure cost
C. VoIP is a technology where voice traffic is carried on top of existing data infrastructure
D. VoIP uses circuit switching technology
VoIP uses circuit switching technology
Explanation: VoIP (Voice over Internet Protocol) does not use circuit switching technology; instead, it utilizes packet switching technology. Circuit switching is a traditional telecommunication method where a dedicated communication path is established between two devices for the duration of their conversation. This method is commonly associated with traditional telephone networks.
What is an example of a diskless workstation?
A. Thin client computer
B. Handheld devices
C. Midrange server
D. Personal computer
Thin client computer
Explanation: Thin client computers, also known as lean, zero, or slim clients, are computing devices or programs that rely extensively on another computer, typically a server, to perform their computational functions. This is in contrast to traditional fat clients, which are designed to handle these functions independently. Thin clients are commonly used in environments where centralized management, reduced hardware complexity, and the ability to leverage server resources are advantageous. Diskless workstations are a specific example of thin client computers. In these setups, the workstations do not have local storage and depend on the server for data persistence and processing.
Who bears the ultimate responsibility for providing requirement specifications to the software development team?
A. The project leader
B. The project members
C. The project steering committee
D. The project sponsor
The project sponsor
Explanation: The project sponsor is ultimately responsible for providing requirement specifications to the software development team. The sponsor owns the business case and funding and holds data and application ownership, and therefore defines what the system must deliver.
Which statement incorrectly describes devices and their position within the TCP/IP model?
A. Hub works at LAN or WAN interface layer of a TCP/IP model
B. Layer 4 switch work at Network interface layer in TCP/IP model
C. Router works at Network interface layer in TCP/IP model
D. Layer 3 switch work at Network interface layer in TCP/IP model
Layer 4 switch work at Network interface layer in TCP/IP model
Explanation: A Layer 4 switch makes forwarding decisions using transport-layer information such as TCP and UDP port numbers, so it operates at the transport layer of the TCP/IP model, not at the network interface layer. Ordinary Layer 2 switches work at the network interface layer, and Layer 3 (multilayer) switches add routing functions at the Internet layer.
Which transmission media uses a transponder to send information?
A. Coaxial cable
B. Copper cable
C. Satellite Radio Link
D. Fiber Optics
Satellite Radio Link
Explanation: A satellite radio link sends information via a transponder: the satellite receives the uplink signal from an earth station, amplifies it, shifts it to a different frequency, and retransmits it on the downlink. The other media listed carry signals directly. Fiber optic cables are ideal for long distances, challenging to splice, resistant to crosstalk, difficult to tap, and support voice, data, image, and video. Copper cables are simpler to install but susceptible to tapping, and are mainly used for short-distance voice and data. Coaxial cables, patented by Oliver Heaviside in 1880, effectively support data and video transmission but are costlier than twisted pair. Microwave radio systems offer large bandwidth for point-to-point communication but are limited to line-of-sight propagation. Satellite radio broadcasts reach nationwide audiences with a broader programming variety than terrestrial radio but are susceptible to interception, as are short-range radio systems, which are otherwise cost-effective. Each transmission medium has strengths and weaknesses that suit it to specific communication needs.
Which layer to protocol mapping is incorrect within the TCP/IP model?
A. Transport layer - TCP
B. Network Layer - ICMP
C. Application layer - NFS
D. Network layer - UDP
Network layer - UDP
Explanation: UDP is a transport-layer protocol of the TCP/IP model, not a network (Internet) layer protocol, so the mapping in option D is incorrect.
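UDP's place at the transport layer shows up directly in socket programming: the application addresses a `(host, port)` pair and the Internet layer (IP) carries the packet. A minimal loopback sketch, with arbitrary payload and addresses:

```python
import socket

# UDP sits at the transport layer of the TCP/IP model: a SOCK_DGRAM socket is
# addressed by (IP address, port), while the Internet layer (IP) moves the
# packet itself. Being connectionless, no handshake precedes the send.
server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))      # bind to loopback; OS picks a free port
server.settimeout(5)
addr = server.getsockname()

client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.sendto(b"hello", addr)      # one datagram, no connection setup

data, peer = server.recvfrom(1024)
print(data)                        # b'hello'
client.close()
server.close()
```

Contrast this with TCP, which would require `connect`/`accept` before any data flows.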
Which exposure could be caused by a line grabbing technique?
A. Lockout of terminal polling
B. Unauthorized data access
C. Excessive CPU cycle usage
D. Multiplexor control dysfunction
Unauthorized data access
Explanation: Line grabbing will enable eavesdropping, thus allowing unauthorized data access, it will not necessarily cause multiplexor dysfunction, excessive CPU usage or lockout of terminal polling.
What is the concept of "defense in depth"?
A. multiple firewalls are implemented.
B. more than one subsystem needs to be compromised to compromise the security of the system and the information it holds.
C. intrusion detection and firewall filtering are required.
D. multiple firewalls and multiple network OS are implemented.
more than one subsystem needs to be compromised to compromise the security of the system and the information it holds.
Explanation: With defense in depth, more than one subsystem needs to be compromised to compromise the security of the system and the information it holds. Subsystems should default to secure settings, and wherever possible should be designed to fail secure rather than fail insecure.
Who is responsible for providing technical support for the hardware and software environment and managing the requested system?
A. System Development Management
B. Senior Management
C. Quality Assurance
D. User Management
System Development Management
Explanation: In the system development process, various roles and responsibilities contribute to the success of a project.
Senior Management demonstrates commitment and allocates resources, ensuring project involvement.
User Management takes ownership, allocates qualified representatives, and actively participates in key aspects, focusing on software reliability, effectiveness, ease of use, and regulatory compliance.
The Project Steering Committee provides overall direction and ensures stakeholder representation.
System Development Management offers technical support, while the Project Manager oversees day-to-day activities, aligning them with project goals and resolving conflicts.
The Project Sponsor provides funding, defines critical success factors, and owns data and application ownership.
The System Development Project Team and User Project Team complete assigned tasks, communicate effectively, and work according to local standards. The Security Officer ensures effective system controls, and Quality Assurance personnel review deliverables, ensuring compliance with requirements and proposing recommendations for improvement when needed.
Which database model allows many-to-many relationships in a tree-like structure that allows multiple parents?
A. Relational database model
B. Object-relational database model
C. Hierarchical database model
D. Network database model
Network database model
Explanation: Network database model: This model uses a graph structure to represent data, allowing a record to have multiple parent records, unlike a hierarchical model which only allows one parent per child.
Relational database model: While relational databases can handle complex relationships, they use tables and foreign keys to establish relationships, not a tree-like structure with multiple parents.
Object-relational database model: This model combines object-oriented programming concepts with relational databases. It doesn't inherently support a tree-like structure with multiple parents for many-to-many relationships.
Hierarchical database model: This model uses a strict tree structure where each record can only have one parent, making it unsuitable for many-to-many relationships.
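The distinguishing feature of the network model, a member record with more than one owner (parent), can be sketched with plain dictionaries. This is a toy illustration of the structure only; record and set names are invented:

```python
# Minimal sketch of the network database model: owner -> member links form a
# graph (a simplified CODASYL-style "set"), so the same member record can
# have multiple parents, which a strict hierarchy forbids.
links = {
    "Supplier-A": ["Part-101"],
    "Supplier-B": ["Part-101"],   # the same part is owned by two parents
}

def parents_of(member: str) -> list:
    """Return every owner record that links to the given member record."""
    return [owner for owner, members in links.items() if member in members]

print(parents_of("Part-101"))  # ['Supplier-A', 'Supplier-B']
```

In a hierarchical model the second link would be impossible; `Part-101` could appear under only one parent.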
In an enterprise data flow architecture, which layer is concerned with basic data communication?
A. Data preparation layer
B. Internet/Intranet layer
C. Desktop Access Layer
D. Data access layer
Internet/Intranet layer
Explanation: In an enterprise data flow architecture, the Internet/Intranet layer is concerned with basic data communication between the other layers, relying on standard network protocols and services such as TCP/IP and HTTP to move data across the enterprise.
Data preparation layer: This layer focuses on transforming and preparing data for analysis, not on the actual communication of the data. It might involve cleaning, filtering, and formatting data.
Desktop Access Layer: This layer is where end users access information, typically through query, reporting, and analysis tools; it is not responsible for the underlying data communication.
Data Access Layer: This layer facilitates communication between the application and the database. It handles database queries and data retrieval, but not the underlying network transport.
Which protocol primarily provides confidentiality in a web-based application, protecting data sent between a client machine and a server?
A. SSH
B. S/MIME
C. FTP
D. SSL/TLS
SSL/TLS
Explanation: The protocol that primarily provides confidentiality in a web-based application, protecting data sent between a client machine and a server, is SSL/TLS. TLS is the standardized successor to SSL; it encrypts the channel so that intercepted traffic cannot be read.
Which attack involves slicing small amounts of money from a computerized transaction or account?
A. Masquerading
B. Salami slicing, or "microtransaction fraud"
C. Traffic Analysis
D. Eavesdropping
Salami slicing, or "microtransaction fraud"
Explanation: In a salami slicing attack, the attacker carries out numerous small, unnoticed transactions or transfers from various accounts, gradually accumulating funds without raising suspicion. The term salami slicing is derived from the idea of slicing off small and unnoticeable pieces, similar to how thin slices of salami are taken from a larger piece.
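A classic textbook variant of the scheme skims the sub-cent remainders left over when interest is truncated to whole cents. The sketch below is purely illustrative, with invented balances and an assumed interest rate, to show how the "slices" accumulate:

```python
from decimal import Decimal, ROUND_DOWN

# Illustrative only: truncating each interest credit to whole cents leaves a
# tiny remainder per account; a salami-slicing scheme diverts those remainders.
balances = [Decimal("1033.87"), Decimal("2450.19"), Decimal("879.02")]
rate = Decimal("0.0375")   # assumed annual rate for this illustration

skimmed = Decimal("0")
for bal in balances:
    exact = bal * rate
    credited = exact.quantize(Decimal("0.01"), rounding=ROUND_DOWN)
    skimmed += exact - credited   # the unnoticed "slice" from this account

print(skimmed)   # fractions of a cent, growing with every account processed
```

Across millions of accounts the per-transaction slice stays invisible to any single customer while the diverted total becomes substantial, which is exactly why the attack evades casual review.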
Which control is most effective in detecting accidental corruption during data transmission across a network?
A. Parity checking
B. Sequence checking
C. Symmetric encryption
D. Check digit verification
Parity checking
Explanation: A parity check detects transmission errors by appending a redundant bit so that the count of 1-bits is even (or odd). Applied to a single character, it is called a vertical or column check; applied across all the characters in a block, it is called a longitudinal or row check. Using both types simultaneously greatly increases the error detection capability compared with either check alone.
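The single-character (vertical) check is easy to demonstrate. The sketch below uses even parity on a 7-bit character; the data value and the flipped bit are arbitrary:

```python
def even_parity_bit(byte: int) -> int:
    """Return the bit that makes the total count of 1-bits even."""
    return bin(byte).count("1") % 2

def check(byte: int, parity: int) -> bool:
    """Recompute parity at the receiver and compare with the transmitted bit."""
    return even_parity_bit(byte) == parity

data = 0b1011001                      # four 1-bits, so the parity bit is 0
parity = even_parity_bit(data)

print(check(data, parity))            # no error introduced: check passes
corrupted = data ^ 0b0000100          # flip a single bit "in transit"
print(check(corrupted, parity))       # single-bit error: check fails
```

Note the known limitation: flipping two bits would leave the count even and slip past a single parity check, which is why combining column and row checks detects far more errors.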
Which type of risk is associated with authorized program exits, commonly known as trap doors?
A. Business risk
B. Inherent risk
C. Detection risk
D. Audit risk
Inherent risk
Explanation: Trap doors are authorized program exits, typically left in software for debugging or maintenance, that bypass normal security controls. Because this exposure exists in the system itself, before and regardless of any controls put in place to mitigate it, it represents inherent risk: the level of risk present before controls are applied.
Which comparisons are used for identification and authentication in a biometric system?
A. One-to-one for identification and one-to-many for authentication
B. One-to-many for identification and authentication
C. One-to-many for identification and one-to-one for authentication
D. One-to-one for identification and authentication
One-to-many for identification and one-to-one for authentication
Explanation: In identification mode, a biometric system conducts a one-to-many comparison against a biometric database to ascertain the identity of an unknown individual. Identification succeeds if the comparison of the biometric sample to a stored template falls within a predetermined threshold. Identification mode can be used for 'positive recognition,' where the user provides no claim about which template is theirs, or for 'negative recognition,' where the system establishes whether the person is who they deny being. In verification (authentication) mode, the system performs a one-to-one comparison of a captured biometric against the specific template associated with the claimed identity in order to confirm it. Management of biometrics involves ensuring effective security for the collection, distribution, and processing of biometric data, addressing data integrity, authenticity, and non-repudiation. It covers the entire life cycle of biometric data, including enrollment, transmission, storage, verification, identification, and termination, as well as the use of both one-to-one and one-to-many matching for authentication and identification. Biometric technology can be applied to internal and external access control and to both logical and physical access control. Securing the physical hardware used throughout the biometric data life cycle, and protecting the integrity and privacy of biometric data, are also crucial components of effective biometrics management.
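The one-to-many versus one-to-one distinction can be made concrete with a toy matcher. Everything here is an assumption for illustration: templates are two-element feature vectors, Euclidean distance is the matching score, and `THRESHOLD` is an invented decision threshold.

```python
import math

THRESHOLD = 1.0   # assumed decision threshold for this illustration

# Toy template database: user id -> enrolled feature vector.
database = {
    "alice": (0.1, 0.9),
    "bob":   (0.8, 0.2),
}

def distance(a, b):
    return math.dist(a, b)

def identify(sample):
    """One-to-many: search the whole database for the closest match."""
    best = min(database, key=lambda uid: distance(sample, database[uid]))
    return best if distance(sample, database[best]) <= THRESHOLD else None

def verify(claimed_id, sample):
    """One-to-one: compare only against the claimed identity's template."""
    return distance(sample, database[claimed_id]) <= THRESHOLD

print(identify((0.15, 0.85)))        # searches every template -> 'alice'
print(verify("bob", (0.75, 0.25)))   # single comparison -> True
```

Identification cost grows with database size, while verification stays constant, one reason large-scale systems prefer the one-to-one mode whenever an identity claim is available.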
Which systems are designed to detect network attacks in progress and assist in post-attack forensics?
A. System logs
B. Audit trails
C. Tripwire
D. Intrusion Detection Systems
Intrusion Detection Systems
Explanation: Intrusion Detection Systems are designed to detect network attacks in progress and assist in post-attack forensics, while audit trails and logs serve a similar function for individual systems.
What risk could represent a threat to non-RFID networked or collocated systems, assets, and people in RFID technology?
A. Privacy Risk
B. Externality Risk
C. Business Intelligence Risk
D. Business Process Risk
Externality Risk
Explanation: Externality risk is the risk an RFID system poses to systems, assets, and people that are networked with or collocated with it but are not themselves part of the RFID implementation. Examples include hazards from the electromagnetic radiation emitted by the RFID subsystem and computer network attacks launched through networked RFID devices and applications against the enterprise subsystem, with impacts ranging from performance degradation to the compromise of mission-critical applications. By contrast, business process risk, business intelligence risk, and privacy risk concern the RFID-enabled processes and data themselves. Given RFID's potential to improve efficiency and responsiveness across industries, effective risk management is essential in any RFID implementation.
Which controls are effective in detecting duplicate transactions, such as payments made or received?
A. Referential integrity controls
B. Time stamps
C. Reasonableness checks
D. Concurrency controls
Time stamps
Explanation: Time stamps are an effective control for detecting duplicate transactions such as payments made or received. When two records carry the same key fields (for example, payee and amount), their time stamps make it possible to recognize the later record as a potential re-submission and investigate it.
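One common shape for such a control is sketched below. The payment records, the matching key, and the five-minute duplicate window are all invented for illustration; a real system would tune these to its own business rules.

```python
from datetime import datetime, timedelta

WINDOW = timedelta(minutes=5)   # assumed duplicate-detection window

payments = [
    {"payee": "ACME", "amount": 100, "ts": datetime(2024, 1, 1, 9, 0)},
    {"payee": "ACME", "amount": 100, "ts": datetime(2024, 1, 1, 9, 2)},   # re-submission
    {"payee": "ACME", "amount": 100, "ts": datetime(2024, 1, 1, 15, 0)},  # genuine new payment
]

def find_duplicates(txns):
    """Flag payments whose key fields repeat within WINDOW of the prior one."""
    seen = {}   # (payee, amount) -> time stamp of the last matching payment
    dups = []
    for t in txns:
        key = (t["payee"], t["amount"])
        if key in seen and t["ts"] - seen[key] <= WINDOW:
            dups.append(t)
        seen[key] = t["ts"]
    return dups

print(len(find_duplicates(payments)))   # only the 9:02 re-submission is flagged
```

Without the time stamp, the 9:02 re-submission and the 15:00 payment would be indistinguishable; the time stamp is what separates a likely duplicate from a legitimate repeat transaction.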