
What is FCAPS (Fault, Configuration, Accounting, Performance and Security)?

  • Yashoda Gandhi
  • Dec 08, 2021

Introduction

 

FCAPS is a framework and model for network management based on the ISO Telecommunications Management Network model and methodology. FCAPS is an acronym for fault, configuration, accounting, performance, and security.

 

In the 1990s, the ITU-T refined FCAPS as part of its Telecommunications Management Network (TMN) recommendation on management functions. FCAPS is a paradigm that network operators and service providers can use to compare the capabilities and features of management and monitoring systems.

 

The FCAPS model is a universal paradigm that is utilized for Element Management Systems (EMS), Network Management Systems (NMS), and Operation Support Systems (OSS).

 

Users have distinct preferences, and not all management and monitoring systems are created equal. Users of monitoring and management systems therefore need a reference model to assess a solution's strengths and weaknesses.

 

FCAPS divides the operational objectives of network management into five tiers: fault management (F), configuration management (C), accounting management (A), performance management (P), and security management (S).

 

(Suggested read: What is Security Misconfiguration and Vulnerability Management?)

 

 

Details of FCAPS

 

  1. Fault management

 

  • Creating the conditions required for proper network and CI operation;

  • Monitoring the overall health of the network and detecting threats;

  • Notifying administrators of probable system breakdowns;

  • Locating and pinpointing the source of failures;

  • Continuously logging data for analysis and correlation, to facilitate automated fault resolution.

 

Early fault detection, isolation of negative consequences, fault repair, and recording of the corrections are among the goals and objectives. 

 

To ensure that problems are identified, evaluated, and remedied quickly, the network operator must ensure that (typically automatic) fault notification is followed by swift human or supervised automatic action.

 

When a configuration item (CI) fails, or when an event interferes with or prohibits appropriate operation or service delivery, faults arise.
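The fault-management loop described above can be sketched in code. This is a minimal illustration, not a real NMS: the device name, metric names, and thresholds are all invented for the example. It checks polled metrics against limits, logs a warning for correlation, and returns structured fault events.

```python
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("fault-mgmt")

# Hypothetical per-metric health thresholds (illustrative values only).
THRESHOLDS = {"cpu_pct": 90.0, "link_errors": 100}

def check_device(name, metrics):
    """Return a list of fault events for metrics exceeding thresholds."""
    faults = []
    for metric, limit in THRESHOLDS.items():
        value = metrics.get(metric)
        if value is not None and value > limit:
            event = {
                "device": name,
                "metric": metric,
                "value": value,
                # Timestamped so events can be correlated later.
                "time": datetime.now(timezone.utc).isoformat(),
            }
            log.warning("fault on %s: %s=%s (limit %s)", name, metric, value, limit)
            faults.append(event)
    return faults

# CPU is over its limit, link errors are not, so one fault is raised.
faults = check_device("router-1", {"cpu_pct": 97.5, "link_errors": 12})
```

In a real system the returned events would feed a notification channel and an event store for correlation, as the text describes.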

 

 

  2. Configuration management

 

Configuration management is a systems engineering method that ensures a product's properties remain consistent throughout its life cycle. In IT, configuration management is a management practice that tracks the individual configuration items of an IT system.

 

IT systems are made up of a variety of Information Technology assets with varying levels of granularity. A piece of software, a server, or a cluster of servers can all be considered IT assets.

 

Software configuration management is a systems engineering procedure for tracking and monitoring changes to a software system's configuration metadata. Some examples of software configuration metadata are:

 

  • Specifications for CPU, RAM, and other computational hardware resource allocations.

  • External connections to other services, databases, or domains are specified through endpoints.

  • Passwords and encryption keys are examples of secrets.
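The metadata categories above can be modeled as a single configuration-item record. This is a hypothetical sketch; the class and field names are made up for illustration and do not come from any particular CMDB or tool.

```python
from dataclasses import dataclass, field

@dataclass
class ServiceConfig:
    """Hypothetical configuration item covering the metadata kinds above."""
    name: str
    cpu_cores: int                                 # compute resource allocation
    ram_mb: int                                    # memory allocation
    endpoints: dict = field(default_factory=dict)  # external connections
    secrets: dict = field(default_factory=dict)    # passwords, keys (store securely!)

cfg = ServiceConfig(
    name="billing-api",
    cpu_cores=4,
    ram_mb=8192,
    endpoints={"db": "postgres://db.internal:5432/billing"},
    secrets={"db_password": "<redacted>"},
)
```

Keeping this record under version control (with secrets externalized) is what lets configuration management detect and audit drift over a system's life cycle.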

 

 

  3. Accounting management

 

Accounting management is concerned with gathering network usage data so that individual users, departments, or business units can be billed or charged correctly.

 

While this may not be true for all businesses, in larger firms the IT department is often viewed as a cost centre that recovers its costs based on resource use by specific departments or business units. In non-billed networks, "administration" replaces "accounting".

 

The objectives of administration are to manage the set of authorized users by creating users, passwords, and permissions, and to manage equipment functions such as software backup and synchronization.
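The chargeback idea behind accounting management can be sketched as follows. The usage records, department names, and per-gigabyte rate are all invented for the example; a real system would pull usage from flow or SNMP data.

```python
from collections import defaultdict

# Hypothetical usage records (bytes transferred), tagged by department.
usage_records = [
    {"dept": "sales", "bytes": 10_000_000},
    {"dept": "engineering", "bytes": 250_000_000},
    {"dept": "sales", "bytes": 5_000_000},
]

RATE_PER_GB = 0.08  # assumed chargeback rate, currency units per GB

def chargeback(records):
    """Sum usage per department and convert it to a billed amount."""
    totals = defaultdict(int)
    for rec in records:
        totals[rec["dept"]] += rec["bytes"]
    return {dept: round(b / 1e9 * RATE_PER_GB, 4) for dept, b in totals.items()}

bills = chargeback(usage_records)
```

The same aggregation, minus the billing step, serves the non-billed "administration" variant: per-department usage reports without invoices.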

 

(Related reading: Extended Detection and Response (XDR))

 

 

  4. Performance management

 

Network performance management is concerned with the network's efficiency. Throughput, percentage utilisation, error rates, and response times are all addressed by the network performance function. The ability to collect and analyse performance data helps satisfy SLAs and supports capacity planning.

 

As with fault management, historical performance data must be analysed so that capacity or reliability concerns are addressed before they affect service. One of the most common questions is whether bandwidth is being used wisely.

 

Certain forms of traffic, such as VoIP calls, may require policy configuration to grant bandwidth priority. Once this is completed, VoIP monitoring may be used to confirm that the calls are of the proper quality.
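The throughput, utilisation, and error-rate metrics mentioned above are typically derived from interface counter deltas. This sketch uses counter names that mirror SNMP IF-MIB objects, but the sample values, interval, and link speed are invented for illustration.

```python
def link_stats(prev, curr, interval_s, speed_bps):
    """Derive performance metrics from two counter samples taken interval_s apart."""
    octets = curr["ifInOctets"] - prev["ifInOctets"]
    errors = curr["ifInErrors"] - prev["ifInErrors"]
    throughput_bps = octets * 8 / interval_s          # octets -> bits per second
    return {
        "throughput_bps": throughput_bps,
        "utilisation_pct": 100.0 * throughput_bps / speed_bps,
        "error_rate": errors / interval_s,            # errors per second
    }

stats = link_stats(
    {"ifInOctets": 0, "ifInErrors": 0},
    {"ifInOctets": 75_000_000, "ifInErrors": 30},
    interval_s=60,
    speed_bps=100_000_000,  # 100 Mbit/s link
)
```

Tracking these numbers over time is what lets an operator spot a saturating link and decide whether traffic such as VoIP needs priority.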

 

 

  5. Security management

 

Security management is a network management function that focuses on safeguarding both the network as a whole and individual devices from malicious or unintentional misuse, illegal access, and data loss.

 

Security Management is also in charge of establishing limitations for each controlled element based on standards and specifications.

  

Implementing an SNMP-based Network Management System without taking security into account might be a major issue, especially in commercial networks. 

 

Security should be a high priority even for home networks, to guarantee that critical data is not publicly accessible or easily obtained. Security is easiest to understand by categorizing security functions, as discussed by snmpcenter, as follows:

 

  • Authentication

 

Authentication is the process of verifying a person's identity, usually through a username and password or, in some situations, biometrics such as fingerprints.

 

In security systems, authentication differs from authorisation, which is the process of granting persons access to system objects based on their identity.

 

Authentication only confirms that people are who they claim to be; it says nothing about their access permissions.
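Password-based authentication can be sketched with the Python standard library. This is a minimal illustration, not a production policy: the iteration count and the sample password are arbitrary, and a real system would also handle account lockout and credential storage.

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None, iterations=200_000):
    """Derive a salted PBKDF2-HMAC-SHA256 digest for storage."""
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, digest

def authenticate(password, salt, stored_digest, iterations=200_000):
    """Re-derive the digest and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return hmac.compare_digest(candidate, stored_digest)

salt, digest = hash_password("s3cret")
```

Note that a successful `authenticate` call only proves identity; what the user may then do is the separate authorization question covered next in the text.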

 

  • Authorization

 

The process of granting individuals access to system objects based on their identity is known as authorization. Both steps are necessary: authentication identifies the individual, and authorization grants access privileges.
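A common way to implement authorization is role-based access control. This toy sketch (the roles and actions are invented for the example) maps each role to a set of permitted actions and checks an already-authenticated user's request against it.

```python
# Hypothetical role-to-permission mapping for a managed network.
ROLE_PERMISSIONS = {
    "viewer": {"read"},
    "operator": {"read", "restart"},
    "admin": {"read", "restart", "configure"},
}

def is_authorized(role, action):
    """Authorization: decide what an already-authenticated user may do."""
    return action in ROLE_PERMISSIONS.get(role, set())
```

Unknown roles get an empty permission set, so the check fails closed by default.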

 

  • Segmentation

 

Network segmentation entails dividing the managed network into logical domains, which are then allocated to roles or users to limit access to domains and NEs.

 

The administrator does network segmentation by creating one or more Network Access Domains (NADs) and assigning them to existing Domains / Network Objects.
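The NAD model above can be sketched as a mapping from domains to network elements and from users to their assigned domains. The domain, user, and device names here are all hypothetical.

```python
# Hypothetical Network Access Domains (NADs) and user assignments.
network_access_domains = {
    "core": {"router-1", "router-2"},
    "branch": {"switch-a", "switch-b"},
}
user_domains = {"alice": {"core"}, "bob": {"branch"}}

def can_access(user, element):
    """A user may reach an element only via an assigned domain."""
    return any(
        element in network_access_domains[domain]
        for domain in user_domains.get(user, set())
    )
```

Segmentation thus limits the blast radius of a compromised or misused account: each user sees only the elements inside their domains.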

 

  • Secure Communication

 

The term "secure communication" refers to ensuring that the protocols being used are secure or have their secure features activated. The following is a typical secure communication checklist:

 

Check that only secure protocols are used to communicate with network elements; for example, the NMS should use Secure FTP (SFTP) rather than FTP, and Secure Shell (SSH) rather than Telnet. Also check that custom SNMP credentials are used rather than the default public/private community strings.
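Parts of this checklist can be automated. The sketch below flags insecure management protocols and default SNMP community strings; the device record and its field names are invented for illustration.

```python
# Protocols and SNMP community strings the checklist above warns against.
INSECURE_PROTOCOLS = {"ftp", "telnet"}
DEFAULT_COMMUNITIES = {"public", "private"}

def audit_device(device):
    """Return a list of secure-communication findings for one device."""
    findings = []
    for proto in device.get("mgmt_protocols", []):
        if proto.lower() in INSECURE_PROTOCOLS:
            findings.append(f"insecure protocol: {proto}")
    if device.get("snmp_community", "").lower() in DEFAULT_COMMUNITIES:
        findings.append("default SNMP community string")
    return findings

issues = audit_device(
    {"name": "switch-a", "mgmt_protocols": ["ssh", "telnet"], "snmp_community": "public"}
)
```

Running such an audit across the inventory turns the checklist into a repeatable compliance report.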

 

  • Server Hardening

 

Server hardening is a task that requires IT expertise. Unnecessary services and software packages running on the server should be removed as part of the hardening process.

 

Default security settings should be replaced by customising the database and operating system, and plain-text passwords should be replaced with encrypted ones.

 

(Also read: What is Serverless Computing?)

 

 

Five-level management

 

FCAPS divides the operational objectives of network management into five tiers: fault management (F), configuration management (C), accounting management (A), performance management (P), and security management (S).

 

1.  Fault management level:

 

At the fault management level, network defects are recognised and corrected. Future issues are anticipated, and steps are taken to prevent them from arising or recurring. Fault management keeps the network operational and minimizes downtime.

 

2. Configuration management level: 

 

At the configuration management level, network operation is monitored and controlled.

 

Hardware and software changes are coordinated, including the installation of new equipment and programs, the modification of existing systems, and the removal of obsolete systems and programs. Equipment and program inventories are kept and updated regularly at the C level.

 

3.  Accounting management level

 

The accounting management level, also known as the allocation level, is responsible for allocating resources among network subscribers in an optimum and equitable manner. This maximizes the efficiency of the systems available while lowering operating costs. The A level is also in charge of ensuring that users are properly invoiced.

 

4.  Performance management level

 

The network's overall performance is managed at the performance management level. Throughput is boosted, network bottlenecks are reduced, and potential concerns are handled. Identifying which modifications will result in the greatest overall performance increase is an important aspect of the process.

 

5.  Security management level

 

The network is protected against hackers, unauthorized users, and physical or electronic sabotage. Where required, the confidentiality of user information is preserved. Security mechanisms also let network administrators control what each authorized user can (and cannot) do with the system.

 

(Recommended read: Security Analytics guide)

 

 

Future of FCAPS

 

Despite its complexity and breadth, many of the ideas behind FCAPS are now outdated. It needs to be adapted to the current reality of network infrastructure management.

 

The FCAPS security management approach was created before cloud computing, in a period when ownership, accountability, and control were clear and simple. At the time, it was easy to assume that particular entities owned and controlled assets, both implicitly and explicitly. That is no longer the case.

 

Because we operate with virtualized servers, fault detection is harder when applications are hosted in the cloud. For example, different tenants might be affected by faults with the same root cause, such as an overloaded link or server. Constant device updates and additions also lead to configuration mistakes and, ultimately, failures.

 

As a result, to guarantee regulatory compliance, enterprises must safeguard the data, applications, and services that run in the cloud. Cloud platform providers, who own and operate the equipment, share some of this responsibility.

 

As a result, we must rethink the role of FCAPS in the cloud and on premises, and explain how FCAPS improves the stability, availability, provisioning, orchestration, cost effectiveness, and data security of virtualized environments.
