Sri-DS-roopa-130

Blockchain can be described as a distributed ledger in which the public can store all their transactions securely. Data in a blockchain is stored in a way that everybody can view, but only authorized parties can make changes to it. In this age of information technology, data security is critical since people share very sensitive information via the internet (Ghosh et al., 2020). As a result, blockchain technology is of great help because its advanced security measures enable people to store and share information securely.

Cryptocurrencies have made blockchain technology more popular recently. The introduction of cryptocurrency has enabled people to make financial transactions securely online. In this digital era, people prefer to make their financial transactions through the internet. As a result, many are opting to use cryptocurrencies because they are safer and more reliable than other methods of payment. Cryptocurrencies apply blockchain technology to ensure that people can transact virtually and safely (Ghosh et al., 2020). Another benefit of cryptocurrencies is that they enable organizations to carry out smart contracts, thus eliminating intermediaries such as banks. In general, the future of blockchain technology is bright in this digital era, since people depend heavily on the internet for most of their operations.

 

Sri-DS-karthik-130

Blockchain is a Distributed Ledger Technology: a decentralized, open-source, public ledger that stores all transactions. Ripple, for example, is a distributed open-source digital currency with the potential for rapid execution that supports instant payments between peers. It is built on a distributed computer network, a shared network of computers that run a set of software protocols (Javaid et al., 2021). Each peer on the blockchain can work as a miner and allows the blockchain to operate normally. Mining is based on hash puzzles, which require finding a solution to a mathematical problem: the hashes must match exactly, yet the solution should be unpredictable. Whoever solves the hash puzzle receives a share of the transaction fees and the block reward. A block is created when a transaction of value is received from a user, and the block contains the hash of that transaction. Once a new block has been created, a user must wait until all the transaction data and checksum blocks have been received, although Bitcoin allows multiple blocks to be in transfer at any one time. In a peer-to-peer system, transactions and blocks are not kept in sorted order; instead, they are propagated in no particular order, and a newly created transaction or block waits to be published in the blockchain (Javaid et al., 2021).
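
To make the hash-puzzle idea concrete, here is a minimal proof-of-work sketch in Python (the difficulty level and block data string are illustrative assumptions, not details from the cited sources): a miner repeatedly hashes the block data with a changing nonce until the digest begins with a required number of zeros.

import hashlib

def mine(block_data: str, difficulty: int = 4):
    """Search for a nonce whose SHA-256 digest of block_data+nonce starts with `difficulty` zeros."""
    prefix = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(prefix):
            return nonce, digest
        nonce += 1

nonce, digest = mine("alice pays bob 5 units")
print(f"nonce={nonce}, hash={digest}")

Because the hash output is unpredictable, the only way to find a valid nonce is trial and error, which is what makes the puzzle costly to solve yet trivial for other peers to verify.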

The blockchain maintains a complete and accurate view of the entire network and its owners. There are three main categories of blockchain: public blockchains, private blockchains, and consortium blockchains. The public blockchain is the oldest and most trusted type. A public blockchain can be thought of as a digital copy of real-world records, where every transaction can be verified, documented, and reconciled before the data is sent over the public network. To ensure interoperability and consistency, every transaction on the blockchain must first be authenticated by a trusted authority, such as a government entity, a law enforcement agency, or a vendor. The blockchain is composed of a series of blocks, each containing a cryptographic hash pointer used to identify the block and link it to the rest of the chain (Kouhizadeh et al., 2021).
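
A small sketch of the hash-pointer structure described above (the field names and toy transactions are assumptions for illustration): each block stores the hash of the previous block, so tampering with any earlier block invalidates every later link.

import hashlib
from dataclasses import dataclass

@dataclass
class Block:
    data: str
    prev_hash: str

    def block_hash(self) -> str:
        return hashlib.sha256(f"{self.prev_hash}|{self.data}".encode()).hexdigest()

def build_chain(transactions):
    chain, prev = [], "0" * 64            # genesis block points at an all-zero hash
    for data in transactions:
        block = Block(data, prev)
        chain.append(block)
        prev = block.block_hash()
    return chain

def verify(chain) -> bool:
    prev = "0" * 64
    for block in chain:
        if block.prev_hash != prev:       # a broken link means the chain was altered
            return False
        prev = block.block_hash()
    return True

chain = build_chain(["tx1", "tx2", "tx3"])
print(verify(chain))                       # True
chain[1].data = "tx2 (forged)"
print(verify(chain))                       # False: later hash pointers no longer match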

Chai-info-srija-130

Projects are a way of life. The ability to communicate a project to people in a way that others understand is an essential skill; it is the ability to make the world a better place. A project provides many things to other people: things that can be shared, things that can be known by anyone, things that can be understood, ideas to be held, experiences to be shared and learned from, memories, and so on. Each project consists of many elements and many people, and it is these elements that bring the project to life (Vitart et al., 2017).

A project is different from most jobs we know because it comes with a set of skills, experience, and abilities that we do not acquire through a normal workday. The difference is that we do things for our clients: we plan and develop the products and services needed to improve our lives and the lives of others, and we help you be more effective in your work. We have been providing projects for people and organizations for over 30 years, and the difference in our services comes from our team of highly trained professionals (Vitart et al., 2017).

In order to keep a project running successfully and maintain customer relationships and company standards, top management commitment is essential. This is even more true of projects than of any other type of managed work. If top management says there is no time, or the budget is not sufficient, the project needs to be reassessed and reviewed, and the team must either cut it back, find alternative resources, or look for additional funding or time (Meredith et al., 2017).

Project management is the process of planning and organizing a team to accomplish a specific set of business objectives in accordance with a plan. Project management differs from most other types of management in that it is not an ongoing activity; it is expected to last only for a defined period of time. Project management is best characterized by a set of project activities such as planning, initiating, executing, monitoring, and controlling (Meredith et al., 2017).

Project managers, developers, and project owners must know the importance of these items and the impact on a project if they are not met. If they are not met, the project manager should know how to handle the situation and what the best course of action is. There are many challenges that can be faced in the development of any type of project, and their importance is not in doubt. It is the responsibility of the project manager to know and understand each of the challenges and to mitigate them, either proactively or reactively. Proactive mitigation involves defining strategies and processes that can be used to reduce risk in the project; an example of such a strategy would be defining and implementing a risk tolerance model, or a plan for how to deal with and mitigate the risk (Meredith et al., 2017).

What is Agile Project Management?

Chai-info-raja-130

I.T. Project Management

A project refers to any undertaking that is carried out either individually or collaboratively. In some cases, it can involve research or a carefully designed plan to achieve a particular aim. Projects have several attributes: they have a start and a finish point, they have a set budget that is normally capitalized, the first prototypes of a mass-produced product would in most cases be considered a project, they seek to produce immediate changes or benefits, and they involve a number of steps that make up the project life cycle (Nelson, 2007). Projects are quite different from what individuals concentrate on in their day-to-day activities: projects have goals and objectives with definite ending dates, while day-to-day activities are carried out to sustain the business.

Top management commitment and development standards are crucial for successful project management. Top management performs many functions in project management. Specifically, it facilitates employee empowerment and improved levels of job satisfaction through leadership and commitment to the total quality management goal of customer satisfaction; for instance, this can be achieved by creating an organizational climate that promotes total quality and customer satisfaction. Top management also provides direction to the managers who lead the different units of an organization. Top management support is considered one of project management's most critical success factors, as effective executive involvement clearly improves project success levels (Pollack, 2007). Despite these factors, the literature does not provide organizations with a clear list of effective top management support practices for achieving such support. Some of the unique challenges experienced by I.T. projects include mid-project adjustments, poor communication between teams, the use of murky delivery models, staying in touch with remote stakeholders, and the lack of required project management practices.

Sush-ET-kranthi-130

Web Server Auditing


There are various ways that can be utilized to detect weak web server configurations. Understanding the software and hardware architecture of the web server in use is the first step toward identifying weak server configurations. Another method for identifying weak web server configurations is to gain at least a minimum knowledge of the attack surface that is present and of the threats that can occur against it. In order to prevent attacks and ensure a secure web server, it is first of all very important to make sure that the web server is configured as securely as possible (Hadi & Nahari, 2011). To ensure a secure web server, one should eliminate excessive services. In general, many services are present on a web server by default that are not very useful, and this increases the probability of an attack.

Eliminating these excess services can also improve the performance of your web server. It is also advised to set appropriate permissions and privileges in your network. To audit web server security and implement best practices, it is advisable to review and update your content and applications. Make sure that your domain and IP are clean; this can be checked with the help of the MX Toolbox (Ioannou et al., 2019). The use of strong passwords for your personal and other user accounts also helps secure your website, and the use of SSH and the addition of SSL certificates are useful in this process as well.
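
As a rough illustration of checking for unnecessary services, the sketch below (the host address and port list are assumptions for illustration) probes a handful of well-known ports on a machine so an administrator can confirm whether each listening service is actually needed.

import socket

COMMON_PORTS = {21: "FTP", 22: "SSH", 23: "Telnet", 25: "SMTP", 80: "HTTP",
                110: "POP3", 143: "IMAP", 443: "HTTPS", 3306: "MySQL"}

def open_services(host: str = "127.0.0.1"):
    """Return (port, service name) pairs that accept a TCP connection on the given host."""
    found = []
    for port, name in COMMON_PORTS.items():
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(0.5)
            if s.connect_ex((host, port)) == 0:   # 0 means the port accepted the connection
                found.append((port, name))
    return found

for port, name in open_services():
    print(f"Port {port} ({name}) is open -- confirm this service is actually required")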

sush-ET-arun-130

Identifying and Mitigating Weak Web Server Configurations

            Business continuity is essential to service delivery if a firm is to satisfy its customers' demands. A business's continuity depends on the strategies it places around its critical resources to ensure that events or attackers will not impact its operations. Web server security is vital since it maintains service delivery to the users who access services via the web. Given the variety of internet threats, organizations should regularly assess their systems to continually address security concerns across various platforms.

           There are various ways that organizations can identify weak web server configurations. First, they should maintain standard log management to allow the collection and management of the required logs. Second, when collecting web server logs, administrators should prioritize the entries by their timestamps and frequencies. HTTP status codes are vital in security incidents, and one should understand the codes to tell which information is being accessed depending on the server's response to the client request. Profiling the web server and its applications helps the organization identify attacks quickly, since web server applications generate dynamic content. Anomaly detection for web servers allows the detection of various anomalies within the log files.
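
A minimal log-review sketch along these lines, assuming an Apache/Nginx-style common log format and a hypothetical access.log path: it tallies HTTP status codes and flags clients that generate an unusual number of error responses.

import re
from collections import Counter

LOG_LINE = re.compile(r'(?P<ip>\S+) \S+ \S+ \[(?P<ts>[^\]]+)\] "(?P<req>[^"]*)" (?P<status>\d{3}) \S+')

def summarize(log_path: str, error_threshold: int = 50):
    """Count status codes and return clients whose 4xx/5xx responses exceed a threshold."""
    status_counts = Counter()
    errors_by_ip = Counter()
    with open(log_path) as fh:
        for line in fh:
            m = LOG_LINE.match(line)
            if not m:
                continue
            status = int(m.group("status"))
            status_counts[status] += 1
            if status >= 400:
                errors_by_ip[m.group("ip")] += 1
    suspicious = {ip: n for ip, n in errors_by_ip.items() if n >= error_threshold}
    return status_counts, suspicious

counts, suspects = summarize("access.log")
print("Status code distribution:", dict(counts))
print("Clients above the error threshold:", suspects)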

           Firms can adopt various security measures to harden their web servers. First, they can disable the server signature to ensure that the server does not reveal information about the server version, recent errors, or directory contents when errors occur. Second, companies can disable HTTP TRACE and TRACK requests to prevent cross-site tracing attacks on the web server. Creating non-root users to run services also helps limit the damage caused by mistakes or compromised accounts. Besides, companies can disable SSLv2 and SSLv3; when these protocols are enabled, data transferred over them poses a security threat due to their weak encryption (Pavithra & Pari, 2015).
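
Two of these checks can be probed remotely; the sketch below (the hostname is a placeholder assumption) looks at whether the Server header discloses version details and whether the TRACE method is still answered.

import http.client

def probe(host: str, port: int = 80) -> None:
    # 1. Server signature: a verbose Server header leaks software and version details.
    conn = http.client.HTTPConnection(host, port, timeout=5)
    conn.request("HEAD", "/")
    resp = conn.getresponse()
    resp.read()
    print("Server header:", resp.getheader("Server", "<not disclosed>"))
    conn.close()

    # 2. TRACE method: expect 405 or 403 if cross-site tracing has been mitigated.
    conn = http.client.HTTPConnection(host, port, timeout=5)
    conn.request("TRACE", "/")
    resp = conn.getresponse()
    resp.read()
    print("TRACE status:", resp.status, "(405 or 403 expected when disabled)")
    conn.close()

probe("www.example.com")

Disabling SSLv2 and SSLv3 is normally verified in the server configuration itself, since modern client libraries no longer offer those protocol versions for testing.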

Pra-ET-rahul-130

Web server auditing software allows an auditor to view a list of the systems, applications, and operating system components running on a system. This information provides an understanding of the attack surface and of its impact on the protection of an organization's systems. An auditor can also monitor system processes to examine the attack surface by monitoring process events (Pohan et al., 2021). Monitoring is a complex and time-consuming task, and many security vendors do not allow their systems to be audited. Web server configurations can be made configurable, and configuration-aware application programmers should be given the ability to perform real-time configuration of these systems. The configurations to audit include the web server's SSL certificate files and SSL settings, the application server configuration, memory-handling settings, and the network configuration (Pohan et al., 2021). SQL injection against a web server is typically performed by a remote user: when the remote user's input reaches the application, a SQL statement is constructed from it, executed, and its result stored in a query variable. SQL injection can also be accomplished by an application on the affected system. Mitigations include vulnerability remediation and least-privilege auditing, and the auditing of vendor artifacts is also an essential step in the mitigations performed by a forensically sound vendor. Auditing the web server's security and the design of the server (Srinivasan et al., 2021) is a critical part of the overall security posture of the organization.
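
One concrete slice of such an audit is checking the web server's TLS certificate; the sketch below (the hostname is a placeholder assumption) connects to a server and reports how many days remain before its certificate expires.

import socket
import ssl
from datetime import datetime, timezone

def days_until_cert_expiry(hostname: str, port: int = 443) -> float:
    """Connect over TLS and return the number of days before the certificate expires."""
    context = ssl.create_default_context()
    with socket.create_connection((hostname, port), timeout=5) as sock:
        with context.wrap_socket(sock, server_hostname=hostname) as tls:
            cert = tls.getpeercert()               # validated certificate metadata
    # 'notAfter' is formatted like 'Jun  1 12:00:00 2025 GMT'
    not_after = datetime.strptime(cert["notAfter"], "%b %d %H:%M:%S %Y %Z")
    remaining = not_after.replace(tzinfo=timezone.utc) - datetime.now(timezone.utc)
    return remaining.total_seconds() / 86400

print(f"Certificate expires in {days_until_cert_expiry('www.example.com'):.0f} days")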

The attack surface also includes the physical system that executes code (Srinivasan et al., 2021); as a result, an attacker may need to physically access the system to compromise its functionality. The Windows operating system uses a security sandbox to restrict where an attacker can run malicious code. In the incident described, the attack was successful and caused a great deal of disruption to the company: the company's website was unavailable for nearly a week, the web application servers were down for a few weeks, and the web server was sold to another organization. Some security researchers believe that the event could have been prevented had the system been protected, and that the organization did not take the necessary precautions. It is challenging to know which practices will be of the most significant value to the organization (Srinivasan et al., 2021). However, from the perspective of the security architect, the practices that are most likely to be implemented and tested within the constraints of the chosen application security posture represent a more or less limited set of controls.

Pra-ET-navya-130

SQL injection attacks are used to harm a website. When an attacker discovers that input fields are not sanitized appropriately, SQL strings can be appended to maliciously craft a query that is executed by the back-end database. The attacker can insert hostile or arbitrary data into the database; when the website is rendered, it will display this unrelated data, thereby exhibiting a defaced website. The more services and tasks a web server is running, the more chances a potential hacker has to compromise the network; because of this, a simple but easily overlooked measure is to disable and turn off any unnecessary services, ports, or tasks (Satari, 2008).
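
A small, self-contained illustration of the unsanitized-input problem, using an in-memory SQLite database (the table name and payload are made-up examples): the same lookup is shown with string concatenation, which the injection payload subverts, and with a parameterized query, which treats the payload as plain data.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin'), ('bob', 'user')")

user_input = "' OR '1'='1"   # a classic injection payload

# Vulnerable: the payload rewrites the WHERE clause and returns every row.
vulnerable = conn.execute(
    "SELECT name, role FROM users WHERE name = '" + user_input + "'").fetchall()

# Safe: the driver binds the value, so the payload is treated as a literal string.
safe = conn.execute(
    "SELECT name, role FROM users WHERE name = ?", (user_input,)).fetchall()

print("Concatenated query returned:", vulnerable)   # leaks all rows
print("Parameterized query returned:", safe)        # returns nothing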

If the web server is used only for restricted purposes, such as internal administrative data sharing, hosting a static website, or testing and development activities, it can be configured to permit only certain IP addresses or systems. If unnecessary services are enabled, default configuration listings are used, or verbose error data is not concealed, an attacker can compromise the web server through various attacks such as password cracking, error-based SQL injection, command injection, and so on. There are automated tools for scanning a web server and the applications running on it. Websites are hosted on web servers; web servers are themselves computers running an operating system, connected to a back-end database, and running several applications. Any vulnerability in the applications, the database, the operating system, or the network can lead to an attack on the web server. In addition to exposed web server signatures, web servers also by default display the listing of files and directories in the root directory when an index.html file is missing (Dharam & G. Shiva, 2014).

This means that a potential attacker could see all of the directories and subdirectories that are exposed to the browser. If the web server is installed with the default operating and network configuration, then there is a high chance that several unnecessary or unwanted modules are running.

Xav-OE-bharath-130

The Software Development Life Cycle, or SDLC, is a systematic process for developing software applications (Singh, 2020). It provides detailed guidelines for developing software in an efficient and timely manner. Typically, it involves six phases: planning, analysis, design, development, testing, and deployment. The waterfall methodology is popular and was the predominant approach for decades. The problem with this methodology is that it is heavily documentation-driven and the requirements must be comprehensive before development work starts. This necessitated the development of new methodologies, and lean and iterative methodologies were introduced as alternatives to waterfall.

Lean is a software development methodology that is inspired by lean manufacturing principles and practices (Half, 2019). The main idea of the lean manufacturing process is to eliminate waste while making the other processes efficient (Half, 2019). Lean development methodology emphasizes the elimination of waste and enhances customer satisfaction (Half, 2019). In that way, it can be associated with agile methodology.  

The iterative methodology involves a process of repetition. One advantage of this model is that the development team can start work with the initially identified requirements (Half, 2019). Another advantage is that it produces a working version of the project early in the process. The methodology also has drawbacks: the repetitive process can consume resources quickly. As in the overall SDLC, each iteration of this methodology involves processes such as modeling, analysis and design, implementation, testing, and deployment (Half, 2019).

 

Xav-OE-shanmukesh-130

The Software Development Life Cycle (SDLC) provides an organized approach to preparing, generating, and testing new projects. Different methodologies are available as means to utilize the SDLC, based on the specific requirements of the users involved. Two of these methods are the RAD and RUP methodologies.

RAD Methodology

The Rapid Application Development (RAD) methodology “emphasizes extensive user involvement in the rapid and evolutionary construction of working prototypes of a system, to accelerate the systems development process” (Baltzan, 2020, p. 319).

RUP Methodology

The Rational Unified Process (RUP) methodology breaks down the development of software utilizing four gates: inception, elaboration, construction, and transition. RUP is an iterative methodology that allows the user to “reject the product and force the developers to go back to gate one” (Baltzan, 2020, p. 320).

Compare and Contrast

The RAD and RUP methodologies both allow a respectable level of influence by the end user. With the RAD method, the user has a substantial say in the design of the system. With the RUP method, the user can halt all progress at specific stages, or “gates”, in the development process if requirements are not being met. Both methods utilize multiple stages in the life cycle of a project. Baltzan (2020) highlights that the RAD method involves system users in the analysis, design, and development phases, while the RUP method divides its software development process into the inception, elaboration, construction, and transition gates (pp. 319-320). If a user wants a noticeable level of involvement in the projects they invest in, either of these methods could prove suitable.

According to Baltzan (2020), these two methods differ in their overall approach: RAD is aimed at faster production, while RUP is more focused on meeting specific user requirements. Also, the RUP method has the advantage of helping developers by reusing earlier work to address common difficulties, while RAD aims at an end result based on prototype modification (pp. 319-320). If I were a user starting a time-sensitive project and wanted a high level of influence on its development, I might want to go with the RAD method. If time were not nearly as much of an issue as the specifications of the end result, I would lean more towards the RUP method.

Xav-intro-divya-130

1. What is a false discovery rate?

            The false discovery rate can be defined as the expected proportion of rejected hypotheses that are actually false positives. It is used in the statistical approach to multiple hypothesis testing, which helps account for numerous comparisons (Tan et al., 2018). It describes random events that are falsely observed as significant values. Controlling it allows false discoveries to be rejected and can be used efficiently to understand different influences on the data. Each null hypothesis is tested for statistical significance by measuring the confidence of the p-value and comparing it with a threshold, and when k hypotheses are tested, the confidence level determines the occurrence of false positives (Tan et al., 2018).

2. Can a false discovery rate be completely avoided?  Explain.

            The false discovery rate cannot be completely avoided, since the data can contain false positives due to the selected data sets. These false positives can affect the results and can be controlled by increasing confidence levels (Tan et al., 2018). The data can be selected to increase the confidence levels and reduce the false positives; this increase in confidence level can result from sample selection. Raising the confidence level reduces the false discovery rate, so false positives are reduced and kept to a low proportion when a comparative approach is used (Tan et al., 2019).
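
A tiny simulation supports this point (the counts and thresholds are illustrative assumptions): p-values from true null hypotheses are uniformly distributed, so some fraction always lands below any chosen significance threshold; tightening the threshold reduces false positives but never eliminates them.

import numpy as np

rng = np.random.default_rng(42)
null_pvalues = rng.uniform(size=100_000)   # 100,000 tests where nothing is really there

for alpha in (0.05, 0.01, 0.001):
    false_positives = int((null_pvalues < alpha).sum())
    print(f"alpha={alpha:>6}: {false_positives} false positives "
          f"(expected about {alpha * len(null_pvalues):.0f})")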

3. What was the outcome of the results of the use case?

In this case, the results showed that random field theory and false discovery rate thresholds are qualitatively identical for various datasets, while they differ for data sets that have non-trivial changes. The simulation studies show that when the true signal is weak, there is convergence between the random field theory and false discovery rate thresholds. Sample size affects the relationship between random field theory and the false discovery rate: the relationship varies for samples smaller than 15, while larger samples are stable. These results show a relationship that can be regulated to reduce the false discovery rate (Naouma & Pataky, 2019).

Xav-intro-tushar-130

False Discovery Rate

The false discovery rate can be described as an approach used in multiple hypothesis testing to correct for numerous comparisons. Typically, the false discovery rate measures the proportion of accepted results that are false discoveries (Tan et al., 2016). It is mainly used in high-throughput experiments to determine whether an observed score is statistically significant. The false discovery rate is vital because it provides numerical estimates of how enriched the accepted discoveries are for true findings. Also, false discovery rate values can be used as a prior probability of true conclusions in follow-on confirmation experiments.

Can a False Discovery Rate Be Completely Avoided?

Typically, a false discovery occurs when a null hypothesis is rejected mistakenly, leading to a false positive. The family-wise error rate is closely related to the false discovery rate; it is the probability of reaching at least one incorrect conclusion. A false discovery rate can be controlled, though not avoided, by utilizing the Bonferroni adjustment, which safeguards against making one or more false positives (Pellegrina et al., 2019). However, the Bonferroni adjustment can be overly rigorous in some fields, leading to missed findings. Therefore, instead of safeguarding against making any false-positive conclusion, the false discovery rate technique is used as an alternative to the Bonferroni correction to control and minimize the number of false positives.
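
A minimal sketch of the Bonferroni adjustment mentioned above (the p-values are made-up examples): each raw p-value is compared with alpha divided by the number of tests, which guards against even a single false positive but, as noted, can be overly strict.

def bonferroni_reject(pvalues, alpha=0.05):
    threshold = alpha / len(pvalues)        # one shared, much stricter cutoff
    return [p <= threshold for p in pvalues]

pvals = [0.001, 0.008, 0.039, 0.041, 0.27]
print(f"per-test threshold: {0.05 / len(pvals):.3f}")
print(bonferroni_reject(pvals))             # only the smallest p-values survive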

Outcomes of the Results of the Use Case

The fundamental purpose of the use case results is to offer feedback on the use case. On the other hand, the key input is a mechanism whereby the current configurations can be taken and changed into something faster or enhanced (Tan et al., 2016). Typically, this is achieved by integrating the current design with the latest one or by utilizing different configurations with the existing layout. Additionally, the use case is a report that alludes to the widespread impact and is based on scenarios in which different metric values were selected for numerous transactions, transaction frequency, and the number of transactions.

Chan-ID-aneef-130

A false discovery rate is the expected proportion of type 1 errors. A type 1 error occurs when you incorrectly reject the null hypothesis and therefore get a false positive. The false discovery rate approach adaptively spans the entire spectrum from full multiplicity control to none, based on the data encountered (Benjamini, 2010). The FDR helps researchers and statisticians identify as many significant comparisons as possible while still maintaining a low false-positive rate. The FDR's advantages are simplicity, scientific relevancy, and signal adaptability (Naouma & Pataky, 2019).

            In order to reduce the false discovery rate, scientists would have to decrease the value of α, although this in turn increases β, the chance of missing true effects. There is also an algorithm used to control the FDR: the procedure first orders the p-values in ascending order and then uses a different significance level for each of the tests (Tan et al., 2019). For a single hypothesis test, the probability of a false discovery is simply P(false discovery) = α.
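
Two short calculations make these points concrete (all numbers are illustrative assumptions): with m independent tests at level α the chance of at least one false discovery is 1 - (1 - α)^m, and the ordering procedure compares each sorted p-value against its own, progressively larger significance level.

alpha, m = 0.05, 20
print(f"P(at least one false discovery over {m} independent tests) = {1 - (1 - alpha) ** m:.3f}")

def step_up_thresholds(pvalues, q=0.05):
    """Pair each sorted p-value with its rank-specific significance level (k / m) * q."""
    ordered = sorted(pvalues)
    m = len(ordered)
    return [(p, (k / m) * q) for k, p in enumerate(ordered, start=1)]

for p, cutoff in step_up_thresholds([0.003, 0.012, 0.028, 0.44], q=0.05):
    print(f"p={p:<6} threshold={cutoff:.4f} below threshold: {p <= cutoff}")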

The outcome of the use case indicated that the FDR is sensitive to 1-D signal change and is confirmed as a sensitive testing method (Naouma & Pataky, 2019). Both the FDR and RFT methods are computationally efficient; however, FDR is more adaptable to dataset granularity and indicates a lower threshold signal strength. The case study also suggests that both FDR and RFT support results for small sample sizes and the continuum sizes they represent (Tan et al., 2019).
