Computer science topic 1- System fundamentals
1.1 Planning and system installation
1.1.1 Identify the context in which a new system is planned
Data gathering:
In the planning stages there is a process of investigation and research into the current system, involving the following methods, each with respective pros and cons:
Interview
Pros- Detailed information from key people; two-way, face-to-face communication
Cons- Time consuming
Questionnaire
Pros- Data can reach large audiences quickly and can be used to create extensive analysis. Can be fast and effective
Cons- Has to be very well designed otherwise people may not answer properly or completely
Document collection
Pros- Good quality, objective data about the inner workings of the system, such as the inputs and outputs taking place within it
Cons- May be limited or inaccurate or outdated
Observation
Pros- Can gain first hand and unbiased information
Cons- Observing may influence events or people's behaviour, leading to unrealistic results
Requirements specification
Following the collection of data, the current system must be analysed to fully grasp how it functions, identifying its inputs, processes and outputs alongside any existing problems.
This allows the analyst to understand the requirements for the new system, which form the guiding requirements specification list. This is a tightly controlled process for new system developments, so much so that it appears in an IEEE standard.
Requirement specification = Key document in a traditional life cycle that drives all subsequent work, serves as the basis for project success and is often known as the success criteria
This stage of the project takes into account the context in which the new system will be used and identifies any changes and new implementations required.
Issues
Several issues are often involved in new system implementation. Clients may need retraining to learn and adjust to the new system; in extreme cases this can lead to unemployment. The context of the system in terms of contemporary technology also needs to be considered: will the current hardware and software infrastructure be able to meet the new system's technical requirements (e.g. bandwidth, storage, processing)?
1.1.2 Describe need for change management
Change management = Planned movement towards a future desired state
Change management can impact a variety of people or organisations depending on the change required, hence requiring a structured approach
Can occur in many places like:
Hardware
Communication equipment and software
System software
All documentation and procedures associated with the running, support and maintenance of live systems
Why is it so important?
- Drastic changes that can impact everyone need to be managed and organised to ensure smooth transition to new system with minimal disruption
Real world example:
Change: Upgrading from Python 2 to Python 3
Potential Challenges:
Existing code might not work with Python 3.
Students and teachers may need to learn new syntax or libraries.
Benefits of Change Management:
A planned migration process can identify and fix compatibility issues before the upgrade.
Training sessions can help students and teachers adjust to the new version.
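The Python 2 to 3 migration example above can be made concrete with a small sketch: two of the best-known breaking changes are that `/` became true division and `print` became a function. The `average` helper below is a hypothetical illustration, not from the source.

```python
# Python 3 behaviour that commonly breaks unported Python 2 code.
# In Python 2, 19 / 3 == 6 (integer division); in Python 3 it is 6.33...
def average(marks):
    """Return the mean of a list of marks as a float."""
    return sum(marks) / len(marks)   # true division in Python 3

# print is a function in Python 3, not a statement:
print(average([5, 6, 8]))
```

A planned migration would run existing code against checks like this before the switch, rather than discovering the behaviour change in production.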
How does change management help address these challenges?
Planning and communication- Clearly explaining the reason for the change, how it will be done and a timeline for implementing the new system keeps everyone informed and reduces uncertainty
Training and support- Providing helpful resources like tutorials helps users adapt minimizing frustration and ensures optimal use of new features.
Risk management- In order to avoid complications during implementation, potential problems are identified throughout the change and development stages and mitigated.
Factors to consider for successful change management:
People- How change impacts different user groups (teachers, students etc). Address concerns and provide appropriate support
Technology- Ensure compatibility of existing infrastructure and data with new system through thorough testing
Processes- How might certain workflows have to adapt to the new system? Update documentation and procedures to reflect changes
1.1.3 Outline compatibility issues resulting from situations including legacy systems or business mergers.
Legacy systems and compatibility:
Legacy system- Older computer systems or software still in use that may not be compatible with newer tech, causing issues like:
Data loss
Errors
Wasted time and resources
Business Mergers and compatibility
When businesses merge, existing computer systems need to be compatible in order to allow for transfer of data and smooth operating.
Example- Two companies may use similar but differently formatted ID fields or variables to store the same core values, meaning the data would need reformatting before it could be merged.
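The merger example can be sketched in code. The ID formats below (a "A-00042" string style versus plain integers) are hypothetical stand-ins for the two companies' schemes, chosen only to show why reformatting is needed before records can be combined.

```python
# Hypothetical sketch: company A stores customer IDs as "A-00042"
# strings, company B as plain integers. Merging needs one common format.
def normalise_id(raw):
    """Convert either style of customer ID to a plain integer."""
    if isinstance(raw, str) and raw.startswith("A-"):
        return int(raw[2:])          # "A-00042" -> 42
    return int(raw)                  # already numeric (company B style)

merged = [normalise_id(x) for x in ["A-00042", 7, "A-00007"]]
# once normalised, clashes between the two companies become visible:
duplicates = {x for x in merged if merged.count(x) > 1}
```

Without the normalisation step, "A-00007" and 7 would silently coexist as two different customers.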
International interactions and compatibility
When organizations collaborate internationally, issues may arise from differences in the following:
Software- One company might use software incompatible with the other systems.
Languages- Software interfaces and data might be in different languages, hindering communication and collaboration
Social and ethical considerations
Compatibility issues involve ethical and social implications:
Accessibility- If a new system isn't compatible with assistive technologies, it is not inclusive towards those with disabilities
Data privacy- Merging data from different systems raises user privacy and data protection concerns
1.1.4 Compare the implementation of systems using a client’s hardware with hosting systems remotely.
Choosing between local hardware and cloud hosting is a decision to be made in consideration of their respective benefits and disadvantages
Cloud computing in itself has 4 main models that it offers:
Infrastructure as a Service (IaaS)
Platform as a Service (PaaS)
Software as a Service (SaaS)
Network as a Service (NaaS)
Cloud computing:
Pros | Cons
Convenience | Security
Security | Service outages
Backups | Storage limits
Collaboration | Slow speeds
Environmentally friendly | Limited features
Local computing:
Pros | Cons
Security | Cost of hardware
Backups can be controlled | Technical support
Legacy software | Lack of collaboration
Software control |
Feature control |
1.1.5 Evaluate alternative installation processes
Types of system installations alongside pros and cons:
Direct- Old system is stopped and new system is started, no overlap.
Pros:
Minimal time and effort
New system is available immediately
Cons:
If new system fails there is no backup
Parallel- New system is started but old system is kept running alongside it. Data is input into both systems
Pros:
if new system fails old system serves as a backup
Outputs from both systems can be compared to see if new one is running properly
Cons:
Running both systems is costly in terms of time and money
Pilot- New system is piloted (trialed) in a small part of the business. Once it is running correctly the new system is implemented across the organisation.
Pros:
All features can be fully trialled
If new system fails only a small part of the organisation suffers
Staff who were part of pilot study can train other staff
Cons:
For the section piloting the new system, there is no backup if it fails
Phased- New system is introduced in phases as parts of the old system are gradually replaced with new system
Pros:
Allows people to get used to new system features
Training can be done in stages
Cons:
If a new part of the system fails there is no backup for that area
1.1.6 Discuss problems that may arise as a part of data migration
Data migration- Process of moving data from one system to another
Common data migration problems faced:
Incompatible file formats- Different systems may use different file formats that need conversion during migration. Conversion can lead to data loss or corruption if not done carefully
Data structure differences- Data structures define how data is organized within a system. Incompatible structures can make it difficult to import data into new system.
Validation rule issues- Validation rules ensure data accuracy and consistency. Incompatible rules can lead to invalid data within a system.
Incomplete data transfer- Not all data might be transferred successfully during migration, leading to missing information in the new system
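Two of the problems above, incompatible file formats and incomplete transfer, can be sketched together: convert a CSV export to JSON and count the rows so a partial transfer is caught. The sample data is hypothetical.

```python
import csv
import io
import json

# Hedged sketch: converting a CSV export to JSON during migration,
# counting rows so an incomplete transfer would be noticed.
csv_export = "id,name\n1,Ada\n2,Grace\n"
rows = list(csv.DictReader(io.StringIO(csv_export)))
as_json = json.dumps(rows)

# row-count check guards against incomplete data transfer
assert len(rows) == 2
```

A real migration would compare record counts (and ideally checksums) between source and target systems, not just within one script.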
Internationalization issues:
Data formats- Formats for dates, currencies and measurements vary internationally. Migration needs to account for these differences to avoid confusion and errors.
Character sets- Different countries use different character sets to represent text, and incompatible character sets can lead to garbled text during migration.
Tips for successful data migration:
Planning- Thoroughly analyze data formats, structures and validation rules in both systems.
Data cleaning- Ensure data is clean and complete before migration to minimise errors
Testing- Test the migration process thoroughly to identify and fix compatibility issues before full migration.
Internationalization standards- Use international standards for data formats to minimize compatibility problems.
1.1.7 Suggest various types of testing
Types of testing
Alpha testing- Internal testing process performed by developers and testers who are employees of the organization developing the software. Typically done in a lab environment and not in a real world setting
Purpose- To catch bugs that were not discovered during earlier testing phases, focuses on functional correctness, security aspects and software bugs
Process- Involves white box and black box testing techniques and may also include simulated or actual operational testing environments.
Beta testing- Follows alpha testing and involves releasing software to a limited, external group of people outside of organization who test software in real world conditions
Purpose- Primary goal is to obtain feedback from users to identify any potential issues that weren’t caught during internal testing, including usability and compatibility issues as well as validating user experience.
Process- Less controlled and relies on diversity of user base and their environments to uncover unexpected errors
Gamma testing- Less common and typically refers to testing of software that is ready for release or very close to final version. Gamma testing may happen if big changes happen after beta testing.
Purpose- Ensures product readiness for release, particularly after significant late-stage fixes requiring new validation
Process- Like beta testing but less diverse user base and often done internally or very select group of external users.
Other types:
Feedback collection- to gather insights and performance data from users during or after the testing phase, with tools such as direct user reports, automated crash logs or usability surveys
Iterative improvement- Refining of features across rounds of testing feedback.
Test plans consists of 3 parts:
Details of what is being tested
Test data to use
What is expected to happen when test is performed
Three types of test data:
Normal- Data that would be expected to be entered in the system, system should accept it and process it
Extreme- Extreme values are still valid but sit at the edges of the acceptable range; they are used to check that the system responds correctly to all normal data, including boundary values.
Abnormal- Data that should NOT be accepted. System should be able to handle this data without crashing or breaking. Validation checks and the handling of exceptions are often used to ensure this data is handled well.
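The three kinds of test data can be exercised against a small validator. The percentage-mark rule below is a hypothetical example of a validation check, not from the source.

```python
# Sketch: a validation check exercised with the three kinds of
# test data described above (assumed rule: integer marks 0-100).
def valid_mark(value):
    """Accept integer marks from 0 to 100 inclusive; reject all else."""
    return isinstance(value, int) and 0 <= value <= 100

assert valid_mark(55)                        # normal data: accepted
assert valid_mark(0) and valid_mark(100)     # extreme (boundary) data
assert not valid_mark(-5)                    # abnormal data: rejected
assert not valid_mark("ten")                 # handled without crashing
```

Note that the abnormal cases return False rather than raising an exception, which is exactly the "handle without crashing" behaviour a test plan should demand.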
1.1.8 Importance of user documentation
User documentation- Explains how to use each part of a system, acting as a guide to the system's features and characteristics. Poor or confusing documentation slows down implementation and increases support needs
The 3 main styles of user documentation:
Documentation type | Description | Best for |
Tutorial | step by step practical exercises | New users needing guided practice |
Thematic | Chapter/section based, systematically covering each feature | Users learning system feature by feature |
List/reference | Concise lists for quick lookup of commands or features | Technical or experienced users needing fast answers |
1.1.9 Methods of providing user documentation
Method | Advantage | Disadvantage |
Printed/ PDF/ online manuals | Can be detailed, structured and printable. PDFs and web pages are easy to distribute | May become outdated, long manuals can be hard to search or intimidating |
Help files e.g. tooltips, help panels | Contextual help exactly where needed inside interface | May be brief or limited; not a full replacement for full documentation |
FAQs | Quickly answers common usability problems in a direct manner | Limited to common issues, not as useful for complex issues. |
Live chat/ video support | Real time, human interaction; good for urgent problems and reassurance | Requires staff availability and time zones; may not scale well |
Using a mix of these methods gives users both self service options and direct support when needed.
1.1.10 Methods of delivering user training
Type of user training delivery | Advantages | Disadvantages |
Self instruction | Users learn whenever needed through manuals, videos etc, supporting "just in time" learning. Saves costs on instructors, venues and time away from work | Success depends on user motivation and ability to learn independently. Poorly designed learning materials reduce effectiveness. |
Formal classes | Real time interaction with instructor and peers; immediate feedback and questions; structured environment focused on learning with a social element | Shy users may participate less; dominant personalities can control discussion, limited time of instructor per individual learner |
Remote/online training | Accessible anytime, anywhere; inclusive of participants from everywhere; materials remain accessible; reduces discrimination on appearance, age etc | Requires reliable tech and internet; differences in computer literacy can inhibit learning; depends on user-friendly platforms; lacks hands-on, practical aspects. |
Choosing or combining these methods appropriately helps users gain skills efficiently, which directly affects how quickly and successfully a new system is implemented
1.1.11 Causes of data loss
Understanding data loss and system backup means knowing the causes, consequences and prevention methods so systems stay reliable and recoverable
Cause | What it is | Prevention/ avoidance method |
Accidental deletion | Users accidentally deleting files | Backups; file recovery tools |
Computer viruses/malware | Malicious software or corrupt files | Up to date antivirus scans |
Physical damage | Hardware damage to drives (shock, heat, mishandling) | Careful handling; correct temps |
Continued use after warnings | Ignoring crashes, noises, SMART warnings before failure | Heed warnings, backup and replace failing drives |
Power failure | Sudden loss of power corrupts or loses unsaved data | Use UPS to shut down safely
Firmware corruption | Drive control software damaged so OS cannot see disk | Regular backups, hardware repair/board swap |
1.1.12 Consequences of data loss
Loss of critical operational data (e.g. medical records, booking systems) leading to service disruption or unsafe decisions
Financial and reputational damage such as cancelled reservations without customers knowing or legal liability for losing records
1.1.13 Methods to prevent data loss
Method | How it reduces data loss | Example tools |
Regular backups | Allows restoration of data after deletion | Offsite or cloud backups |
Redundancy/ failover | Duplicate systems take over if one fails | Failover servers, RAID |
Removable/external media | Extra copies stored away from main system | External drives, tapes |
Offsite/online storage | Protects against local disasters and theft | Cloud backup services |
Antivirus and updates | Prevents malware-caused deletion or corruption | Symantec and similar tools |
UPS and surge protectors | Prevents sudden shutdowns and power related damage | Uninterruptible power supplies |
Monitoring (SMART) | Detects early signs of hardware failure so you can act | SMART drive monitoring |
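The "regular backups" row can be sketched in code: a backup is only trustworthy if the copy can be verified, for example by comparing checksums of the original and the copy. The in-memory `bytes` copy below is a stand-in for a real file copy.

```python
import hashlib

# Sketch of backup verification: compute a checksum of the data and
# of its copy, and only trust the backup if the two match.
def checksum(data: bytes) -> str:
    """Return the SHA-256 hex digest of the given bytes."""
    return hashlib.sha256(data).hexdigest()

original = b"critical booking records"
backup = bytes(original)             # stand-in for copying to other media

# accept the backup only if the checksums agree
assert checksum(backup) == checksum(original)
```

The same idea scales to offsite and cloud backups: store the checksum alongside the copy so silent corruption is detectable later.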
1.1.14 Describe strategies for managing releases and updates
Release stages
Stage | Description |
Pre-alpha | Very early internal build; many features missing, unstable, used mainly by developers |
Alpha | Most main features added but still quite buggy; tested inside the organisation |
Beta | Feature-complete and given to external users to find bugs and usability issues |
Release candidate | Almost ready for release; no new features, only bug fixes |
General availability | Stable public version for everyday use; later changes are patches/updates |
Update strategies
Strategy | Advantage | Disadvantage |
Automatic updates | Software checks/installs updates itself effortlessly, better security and compatibility | Needs internet, can install at bad times or change behaviour without warning, less user control |
Manual | User chooses which updates to install and when, avoiding unwanted changes | May forget or ignore updates, potentially missing important security/bug fixes; may not know how to install |
Scheduled | Install at planned times to reduce disruption to users | Delays urgent patches; if schedule fails users may be stuck on older versions for longer |
Phased/gradual rollout | Limits impact of bad updates by testing on a small group first; easier rollback if problems appear | Different users are on different versions at the same time, making support and compatibility harder |
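A common way to implement a phased rollout is to hash each user ID into a fixed bucket and enable the update only for buckets below the current rollout percentage; the sketch below assumes this bucketing approach, which is one technique among several.

```python
import hashlib

# Sketch of a phased rollout: each user lands deterministically in a
# bucket 0-99; the update is enabled for buckets below the rollout
# percentage, so raising the percentage only ever adds users.
def in_rollout(user_id: str, percent: int) -> bool:
    bucket = int(hashlib.md5(user_id.encode()).hexdigest(), 16) % 100
    return bucket < percent

users = ["alice", "bob", "carol", "dave"]
early = [u for u in users if in_rollout(u, 25)]    # small first cohort
everyone = [u for u in users if in_rollout(u, 100)]
assert everyone == users
```

Because the bucket depends only on the user ID, the same users stay in the rollout as the percentage grows, which makes rollback and support tractable.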
Why updates matter:
Performance issues- Not updating can cause slower programs, higher memory/CPU use, crashes and general instability, reducing user productivity and system efficiency
Compatibility issues- Different versions can't always work smoothly together, especially across locations. Outdated clients may struggle to exchange data or use new features
Security angle- Old versions often have unpatched vulnerabilities, so skipping updates increases the risk of malware, data breaches etc.