Dark Matter

Introduction
Dark matter is a component of the universe whose distribution preserves the imprint of primordial fluctuations in cosmological density. Its existence is commonly associated with the supersymmetric prediction of new families of particles that interact only weakly with ordinary matter. The growth of dark matter structure began early and led to the formation of galaxies: dark matter provided the gravitational potential wells in which stable structures in the universe could form, enabling the assembly of galaxies, groups, and clusters. A further component, referred to as dark energy, accelerates the expansion of space between gravitationally bound structures such as the Local Group of galaxies.

Discussion
Dark matter shapes both our existence and the future arrangement of matter in the universe. In dark matter theory, cosmic inflation has become the basis for the standard model of big bang cosmology, called Lambda cold dark matter or Lambda-CDM (ΛCDM) (Colloquium on the Age of the Universe, Dark Matter, and Structure Formation, 1998). ΛCDM accounts for the cosmic microwave background data as well as other observations, including the distribution of galaxies and the abundances of hydrogen (including deuterium), helium, and lithium. Dark matter makes up about 23% of the cosmic density, dark energy about 72%, baryonic matter only about 4.6%, and visible baryons roughly 0.5% (Colloquium on the Age of the Universe, Dark Matter, and Structure Formation, 1998).
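
To see how these fractions fit together, the following minimal Python sketch sums the components of the cosmic energy budget, using the approximate percentages quoted above as assumed inputs (illustrative values, not a fit to data). The total comes out close to 100% of the critical density, consistent with a spatially flat ΛCDM universe.

# Cosmic energy budget quoted above (fractions of the critical density).
# Values are the approximate figures given in the text, not a fit to data.
budget = {
    "dark energy":     0.72,
    "dark matter":     0.23,
    "baryonic matter": 0.046,   # of which visible baryons are only ~0.005
}

total = sum(budget.values())
for component, fraction in budget.items():
    print(f"{component:>15}: {fraction:5.1%}")
print(f"{'total':>15}: {total:5.1%}  (close to 100%, i.e. a flat universe)")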

The Existence of Dark Matter
Initially, the infant universe was an extremely hot, dense, homogeneous mixture of photons and matter, tightly coupled together as a plasma. The initial conditions of this early plasma are thought to have been established during a period of extremely rapid expansion referred to as inflation. Quantum fluctuations in the field that drove inflation seeded density fluctuations in the primordial plasma (International Symposium on Cosmology and Particle Astrophysics, He, & Ng, 2003). The primordial gravitational potential fluctuated with nearly the same amplitude on all spatial scales, and these fluctuations formed the small perturbations from which structure later grew.

The small perturbations propagate through the plasma as sound waves, producing underdensities and overdensities in which the matter density and the radiation pressure fluctuate together (International Symposium on Cosmology and Particle Astrophysics, He, & Ng, 2003). CDM, however, contributes no pressure of its own during these oscillations; it acts only gravitationally, either reinforcing or counteracting the acoustic pattern of the photons and baryons. As the universe expands and the plasma cools, conditions are reached at which electrons and baryons can combine into stable atoms, mostly neutral hydrogen (International Symposium on Cosmology and Particle Astrophysics, He, & Ng, 2003). In the process the photons finally decouple from the baryons and the plasma becomes neutral. The perturbations then cease to propagate as acoustic waves, and the existing density pattern is frozen in. This frozen snapshot of the density fluctuations is preserved in the Cosmic Microwave Background (CMB) anisotropies and is also embedded as the observable imprint of baryon acoustic oscillations (BAO) (International Symposium on Cosmology and Particle Astrophysics, He, & Ng, 2003).
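
The scale of this imprint can be made concrete with two standard textbook expressions (shown here for illustration, not taken from the cited sources): the sound speed of the photon-baryon plasma and the comoving sound horizon it sets by recombination,

c_s \;=\; \frac{c}{\sqrt{3\,(1+R)}}, \qquad R \;=\; \frac{3\rho_b}{4\rho_\gamma}, \qquad r_s \;=\; \int_0^{t_{\mathrm{rec}}} \frac{c_s\,\mathrm{d}t}{a(t)},

where \rho_b and \rho_\gamma are the baryon and photon densities and a(t) is the scale factor. The comoving sound horizon r_s is the characteristic length later seen both in the CMB anisotropies and in the BAO correlation of galaxies.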

Recombination leaves behind a largely neutral universe that emits almost nothing across the electromagnetic spectrum. The period that follows is referred to as the dark ages. During this era, CDM begins to collapse gravitationally, especially in the overdense regions, and baryonic matter falls into the collapsing CDM halos, setting the stage for the Cosmic Dawn (International Symposium on Cosmology and Particle Astrophysics, He, & Ng, 2003). Radiation reappears with the formation of the first luminous sources, such as stars. These objects emit enough radiation to re-ionize the intergalactic medium. Most of the structures formed continue to grow and merge under the influence of gravity, producing a vast cosmic web of dark matter density. Luminous galaxies form abundantly within this web and trace the statistics of the underlying matter density; the largest bound objects assembled in this way are clusters of galaxies. The galaxies that form retain the BAO correlation length imprinted at the time the CMB was released. As the universe continues to expand, the negative pressure associated with the cosmological constant, the dark energy of ΛCDM, comes to dominate over gravity and accelerates the expansion of the universe (International Symposium on Cosmology and Particle Astrophysics, He, & Ng, 2003).

Types of Dark Matter
The composition of the universe is dominated by the cosmic density of dark energy and dark matter, but the physical nature of dark matter itself remains to be discovered. Two popular candidate families attempt to explain the dark matter particle. The first is the lightest supersymmetric partner particle, also referred to as the supersymmetric weakly interacting massive particle (WIMP); a WIMP is a weakly interacting dark matter component (Colloquium on the Age of the Universe, Dark Matter, and Structure Formation, 1998). The basic picture is that billions of these dark matter particles pass through a human hand every second, and likewise through the Earth and everything on it. However, WIMPs interact only weakly with other particles and with ordinary matter, so they pass by with their effects almost entirely unnoticed.
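
A rough order-of-magnitude check of that claim can be made in a few lines of Python. The inputs are assumptions for illustration only: a local halo density of about 0.3 GeV/cm^3 (a commonly quoted value), the roughly 232 km/s galactic speed mentioned later in connection with DAMA, a hand cross-section of about 100 cm^2, and a few hypothetical WIMP masses. For GeV-scale masses the rate indeed approaches a billion particles per second; heavier WIMPs give proportionally lower fluxes.

# Order-of-magnitude estimate of the WIMP flux through a human hand.
# Assumed inputs (not from the cited sources): local halo density ~0.3 GeV/cm^3,
# relative speed ~232 km/s, hand cross-section ~100 cm^2.
rho = 0.3          # GeV per cm^3, assumed local dark matter density
v = 232e5          # cm/s (232 km/s)
area = 100.0       # cm^2, rough cross-section of a hand

for mass_gev in (1.0, 10.0, 100.0):        # hypothetical WIMP masses
    number_density = rho / mass_gev        # particles per cm^3
    flux = number_density * v * area       # particles per second through the hand
    print(f"m = {mass_gev:6.1f} GeV  ->  ~{flux:.1e} WIMPs/s through a hand")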

A WIMP behaves much like a neutrino and is invisible as it passes through a typical elementary particle detector. However, through other properties of particles produced in association with the WIMP, it is possible to recognize such events and select them for analysis. Some of the most specific analyses of this kind have been done in models of supersymmetry.

The other candidate particle is the cosmological axion; WIMPs and axions are the most commonly discussed dark matter particles. Supersymmetry extends the standard model of particle physics in a way that helps control the vacuum energy and renormalize gravitational interactions, and it allows gravity to be combined with both the electroweak and the strong interactions. Supersymmetric dark matter also makes a grand unification of the electroweak and strong interactions possible and naturally explains the hierarchy of scales in the universe: unification near the Planck scale helps solve the gauge hierarchy problem. Tying supersymmetry breaking to electroweak symmetry breaking places the mass of the new particles in a range of roughly 100 to 1000 GeV (Colloquium on the Age of the Universe, Dark Matter, and Structure Formation, 1998). Particles in this range naturally produce a WIMP cosmological density of the right size to match the dark matter budget of the universe. Many other particles, including the axion, have been indicated as possible dark matter candidates (Colloquium on the Age of the Universe, Dark Matter, and Structure Formation, 1998).
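
The connection between weak-scale masses and the observed dark matter density can be made explicit with a standard back-of-the-envelope relation for the thermal relic abundance (a textbook approximation, not taken from the cited sources):

\Omega_\chi h^2 \;\approx\; \frac{3 \times 10^{-27}\ \mathrm{cm^3\,s^{-1}}}{\langle \sigma_A v \rangle}.

For an annihilation cross section of weak-interaction strength, \langle\sigma_A v\rangle \sim 3\times 10^{-26}\ \mathrm{cm^3\,s^{-1}}, typical of particles with masses of order 100 GeV, this gives \Omega_\chi h^2 \approx 0.1, close to the measured dark matter fraction; the coincidence is often called the "WIMP miracle."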

Dark matter is the term used for the missing mass of the universe in the standard big bang cosmological model. Dark matter interacts with normal matter through gravity, but it neither absorbs nor emits radiation, which makes it impossible to see. Big bang cosmologists estimate that about 25% of the universe is composed of dark matter, made up of non-standard particles such as neutrinos, axions, and weakly interacting massive particles (WIMPs). About 70% of the known universe is composed of the still more obscure dark energy, leaving about 5% of the universe composed of ordinary matter.

Dark Matter (DAMA) Experiment

A remarkable experiment referred to as Dark Matter (DAMA) is well known for using three styles of detectors to search for WIMPs. The experiment is designed much like experiments used to detect and study neutrinos (Cerulli et al., 2017). However, DAMA looks for a specific reaction: the energy deposited when a WIMP interacts with a particular element at a particular angle.

The DAMA experiment has three phases: two research and development (R&D) setups and one actual experiment that builds on the R&D results. The basic idea behind DAMA is that, because the galaxy rotates at a high speed of about 232 km/s, the detector is continuously swept through the residual CDM material, so studying particle reactions in the detector offers a real possibility of detecting the WIMP content of CDM. A sketch of this motion is given below, followed by the three phases.
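
A minimal Python sketch of that motion, assuming standard illustrative values (the 232 km/s galactic speed quoted above, the Earth's roughly 30 km/s orbital speed, and an orbit inclined about 60 degrees to the galactic plane), shows the detector's speed through the halo varying over the year and peaking around early June. This seasonal variation in the sweep is part of what makes the approach usable as a detection strategy.

import math

V_SUN = 232.0        # km/s, galactic rotation speed (figure from the text)
V_EARTH = 30.0       # km/s, Earth's orbital speed (assumed standard value)
COS_INCL = 0.5       # ~cos(60 deg), inclination of the orbit to the galactic plane

def speed_through_halo(day_of_year):
    """Approximate detector speed through the dark matter halo (km/s)."""
    # Phase chosen so the maximum falls around day ~152 (early June).
    phase = 2.0 * math.pi * (day_of_year - 152) / 365.25
    return V_SUN + V_EARTH * COS_INCL * math.cos(phase)

for day in (1, 91, 152, 244, 335):
    print(f"day {day:3d}: ~{speed_through_halo(day):.0f} km/s")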

Phase one:
The first phase uses a CaF2 scintillator designed to look for 2β (double beta) decay. The experiment is arranged in this way to eliminate known lepton backgrounds. The phase 1 experiment is intended to establish whether signs of WIMP detection can be seen (Cerulli et al., 2017). Once the expected results are obtained, the second phase is designed as follows.

Phase two:
The second phase makes use of the xenon isotope 129Xe, chosen for its high sensitivity in the R&D detection studies. This sensitivity allows the identification of three WIMP candidates: photinos, higgsinos, and Majorana neutrinos (Cerulli et al., 2017). Once successful detection results are obtained, the session opens for phase three, which involves the actual experiment.

Phase three:
LIBRA – Large sodium Iodide Bulk for RAre processes

A sodium iodide (NaI) detector experiment is set up after the two R&D phases. The results obtained should show whether the experiment detects particles whose characteristics qualify them as WIMPs (Cerulli et al., 2017).

The DAMA project was carried out to determine with certainty whether particles exist that meet the requirements for WIMPs. The results obtained from the DAMA experiment reveal characteristics of particles consistent with mirror symmetry, a theory of particle physics according to which every particle of matter has a mirror partner. The experiment suggests that mirror matter particles could constitute the whole of the CDM (Cerulli et al., 2017).

LUX Experiment
The Large Underground Xenon (LUX) experiment is a dark matter experiment designed to operate underground beneath a mile of rock, at the Sanford Underground Research Facility in the Black Hills of South Dakota (Chapman et al., 2013). LUX is designed to look for the dark matter candidates referred to as weakly interacting massive particles (WIMPs), which are considered the leading theoretical candidate for the dark matter particle. The LUX detector contains about a third of a ton of cooled liquid xenon, surrounded by powerful sensors designed to detect the minute flash of light and the electrical charge emitted if a WIMP collides with a xenon atom within the reaction chamber (Chapman et al., 2013). The detector sits under a mile of rock at the Sanford Lab, inside a 72,000-gallon tank of high-purity water. This configuration shields it from cosmic rays and other radiation that could easily interfere with a dark matter signal. Scientists calibrate the detector using neutrons as stand-ins for WIMPs, firing a beam of neutrons into the detector (Chapman et al., 2013). In this way they can carefully quantify how the LUX detector responds to the kind of signal a WIMP collision would produce (Chapman et al., 2013). Other calibration techniques include injecting radioactive gases into the detection chamber to help distinguish signals produced by ambient radioactivity from a potential dark matter signal.
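
The reason the expected signal is such a tiny flash can be seen from simple elastic-scattering kinematics. A minimal sketch, using assumed illustrative values (a hypothetical 100 GeV WIMP, a xenon nucleus of roughly 122 GeV, and the roughly 232 km/s halo speed mentioned earlier), gives a maximum nuclear recoil energy of only a few tens of keV.

# Rough estimate of the maximum nuclear recoil energy from elastic WIMP-xenon
# scattering. Assumed inputs are illustrative, not taken from the experiment.
M_WIMP = 100.0             # GeV, hypothetical WIMP mass
M_XE = 122.0               # GeV, approximate mass of a xenon nucleus (A ~ 131)
V_OVER_C = 232.0 / 3.0e5   # halo speed as a fraction of the speed of light

mu = M_WIMP * M_XE / (M_WIMP + M_XE)              # reduced mass in GeV
e_recoil_max = 2.0 * mu**2 * V_OVER_C**2 / M_XE   # GeV, elastic-scattering limit
print(f"maximum recoil energy ~ {e_recoil_max * 1e6:.0f} keV")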

Persuasive Research Paper

Introduction
Our society has encountered many controversial issues, one of them being the allowance of firearm ownership under the Second Amendment of the Constitution of the United States. Debates constantly crop up over why the Constitution should license the ownership of guns by individual citizens. Today the government of the US faces the challenge of limiting the assaults in which these weapons are involved. Although the government has the power to enact laws, it faces the stiff challenge of governing rifle ownership and use, since citizens who own them cite the Second Amendment. There may be prudent reasons why gun ownership should be allowed. Nevertheless, I remain firm and adamant that licensing citizens to own firearms puts the lives of those same citizens at risk. In this paper, I will discuss the arguments people give for gun ownership and the reasons why licensing citizens' firearm ownership is not plausible. Finally, I will give my stand on this issue and explain why the government and its citizenry should think seriously about eliminating the clause that allows free ownership of firearms.

Arguments for guns
One of the reasons some individuals give for owning a gun is protection from counterparts who may also have a gun (Lott, 2010). These individuals defend their ownership of guns by stating that there is no other way to protect themselves and their families from intruders who have guns unless they also have guns (Lott, 2010). Judging without bias, this defense could well be true. However, I am brought back to cases that show the contrary. Some time ago, a Colorado teen was shot and killed in a prank. Later in the news, an 18-year-old girl was shot and killed by a close family friend, who justified the act by saying she thought the girl was a home intruder. These are just a few of the events captured by the media; there could be many more cases of accidental shootings by children or adults, during rampages and other events, that the media did not capture. Whatever the reasons given for these accidental shootings, they shine a light on the reality of the situation: whenever a gun is involved, there is a probability of a life being lost or of somebody lying in a hospital fighting for his or her life (Kleck, 2005).

Second Amendment
My appeal that guns are harmful and should be banned in the community will be challenged by the Second Amendment, which calls it a ‘right’ for a person to own a gun if he or she wants to. The amendment states that “A well regulated Militia, being necessary to the security of a free State, the right of the people to keep and bear Arms, shall not be infringed” (Whitney, 2012). In simpler terms, the Second Amendment states that anyone has a right to bear arms. I understand that, as things stand, it would be illegal to ban guns in the community. However, my argument for reviewing this clause is about safety, not about defying the law.

Flaws with the Second Amendment
There are many defects I can cite in any law that permits the ownership of arms by every individual in society. Firstly, everyone understands that the police, or any other section of the armed forces of the United States or any other country, undergo a long period of training before they are allowed to take up arms and go out to defend the citizens. From this we deduce that, for an individual to be allowed to handle a firearm, he or she should undergo training in how to use the gun ethically and under what circumstances (Charles, 2009). Although the amendment permits the ownership of these arms, it does not demand that the person be trained in handling the weapon or satisfy any other formal conditions. Federal law allows a person to purchase a gun as long as he or she is over 18 years of age (Charles, 2009). The only people excluded from this right are fugitives, illegal aliens, indicted persons, and those with a criminal history. Even if a person does not fall into any of these categories, that does not mean he or she cannot misuse a firearm. We cannot wait for one person to kill or injure another before taking the step of denying him or her the right to own a gun. That is the reason dozens of people die each day from gun violence. Since the Second Amendment was adopted in 1791, 31 federal court cases about gun laws have been heard. Of these, six were presented in United States District Courts, 19 in the Courts of Appeals, while the remainder reached the Supreme Court (Kleck, 2005).

Secondly, firearms are expensive, which implies that only the rich in the community may have the capacity to purchase them. This will further widen the gap between the poor and the rich in society. It will make the poor feel insecure in the hands of the rich, and it can be argued that this is only because of the Second Amendment.

The Bigger Picture
Even though federal law sets regulations stating who can purchase a firearm, we still hear of dozens of deaths caused by guns. In the sections above, I cited the people excluded from gun ownership, all of whom may lose control and end up misusing firearms. However, my question is: how will the gun dealer identify whether a person is mentally unstable, has a criminal record such as domestic violence, is a fugitive, or is an illegal alien? That shows how unenforceable and vague the regulations governing gun ownership are. In reality, these laws exist for the sake of appearance but do not function. And even where they do, the life of the person who was shot cannot be recovered by the arrest of his or her shooter (Kleck, 2005).

Nullifying the law permitting the free ownership of firearms would be my first appeal. If this is not possible, tougher regulations for gun handling are required to ensure the safety of our society. I urge the government to make this happen, because guns kill, accidentally or on purpose. The bigger picture shows that, on average, about 280 people suffer gunshot wounds each day. Some of these are deliberate and are categorized as murder or suicide, where people kill others or themselves on purpose; others are accidental, and some happen during police interventions (Kleck, 2005).

Oracle vs. SQL Server

Introduction
Any enterprise evaluating a database management system solution should also evaluate how the candidate systems manage data. The ability of a server to manage data properly is the essence of having it in an organization, since data is one of the most critical assets in any business: the success of an organization depends on how well it can use its data to make business decisions. The data needs to be available, and its integrity and confidentiality preserved. If an organization's data is unavailable or unprotected, the enterprise is likely to lose millions of dollars to unplanned downtime and negative publicity. Good data management is therefore critical to business success in today's economy. This document offers a detailed comparative assessment of the two most popular servers, Oracle and SQL Server, in light of data management.

Overview of Data Management in Oracle vs. SQL Server

One of the greatest challenges in the design of highly available information technology (IT) infrastructure is addressing the issues of data management. Data management, when done properly, can reduce the downtime that many organizations face (Bassil, 2012). IT companies should consider the potential causes of both planned and unplanned downtime when designing their infrastructure. A server plays an indispensable role in managing data so that it is always available and in the form in which it was saved, which supports the continuity of the business operations that depend on that data. Data failures in organizations can result from human error, data corruption, or disasters, so it is the responsibility of a database management system to include features that manage the data well enough that it is not negatively affected by those events (Oracle Corporation, 2013).

While data failures are not frequent, their adverse effects on business operations are significant because they result in high downtime costs. The database management system used, whether Oracle or SQL Server, should allow maintenance activities to take place transparently, causing minimal or no interruption to normal business operations. The Oracle database comes with a plethora of integrated capabilities to ensure that organizations can minimize data failures so that they do not adversely impact the business (Callan et al., 2010). For instance, Oracle Multitenant is a new option in Oracle that delivers a groundbreaking technology for database consolidation and cloud computing, and it makes extremely high data and system availability a fundamental requirement wherever database consolidation is applied to business-critical applications.

Microsoft introduced the AlwaysOn solution in SQL Server to address high availability and disaster recovery requirements. The major features included in SQL Server are AlwaysOn Failover Cluster Instances, which address instance failover, and AlwaysOn Availability Groups, which address the failover of a set of databases (Bassil, 2012). Although SQL Server introduced these new capabilities to manage data better, it cannot match the breadth and depth of the data management capabilities found in Oracle (Oracle Corporation, 2013). SQL Server continues to lag behind with regard to data availability, and for that reason the Oracle database is used in many companies that require strong data management and system availability. There are also many differentiators, explained below, in how data management takes place in Oracle and SQL Server. Note that the SQL Server referred to here is SQL Server 2012, whereas the Oracle database referred to is Oracle 12c EE (Enterprise Edition).

The Oracle database incorporates built-in database failure detection, analysis, and repair, whereas SQL Server lacks this feature (Oracle Corporation, 2013). To this end, Oracle includes fast-start fault recovery functionality that controls instance recovery. The feature reduces the time required for cache recovery and bounds recovery time by limiting the number of dirty buffers and redo records generated between the most recent redo record and the last checkpoint. Fast-start checkpointing in Oracle eliminates the bulk writes, and the resulting I/O spikes, that occur with conventional checkpointing (Callan et al., 2010). Unlike SQL Server, where the database is opened to applications only after the undo (rollback) phase has completed, the Oracle database is accessible to applications without waiting for the rollback phase to complete (Kumar, 2007). If a user process encounters a row locked by a crashed transaction, the database simply rolls back that row.
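
As a concrete illustration, the Oracle parameter that drives fast-start checkpointing is FAST_START_MTTR_TARGET. The following minimal sketch, assuming the python-oracledb driver and placeholder credentials (not the configuration of any specific site), sets a recovery-time target and reads back the estimate the instance maintains:

# Minimal sketch: bounding Oracle instance-recovery time with fast-start
# checkpointing. Connection details below are placeholders; requires a user
# with the ALTER SYSTEM privilege.
import oracledb  # python-oracledb driver

conn = oracledb.connect(user="admin", password="secret", dsn="dbhost/orclpdb")
with conn.cursor() as cur:
    # Ask the instance to keep crash-recovery time under ~60 seconds.
    cur.execute("ALTER SYSTEM SET FAST_START_MTTR_TARGET = 60")
    # V$INSTANCE_RECOVERY reports the recovery-time estimate being maintained.
    cur.execute("SELECT estimated_mttr, target_mttr FROM v$instance_recovery")
    print(cur.fetchone())
conn.close()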

Whereas SQL Server stores undo data in the log files, the Oracle database stores it in the database itself, making the recovery process much faster in Oracle than in SQL Server (Callan et al., 2010). SQL Server has to carry out expensive sequential scanning of the log files, which increases the mean time to recover from a data failure. In addition, Oracle supports an incremental backup strategy, while SQL Server supports a partial backup strategy. Oracle also incorporates proactive disk health checks with automatic corruption repair, a feature SQL Server does not have (Oracle Corporation, 2013). The data manager in Oracle does not have to check the health of the disks manually because of that automatic feature, which simplifies data management in Oracle compared with SQL Server, where disk health checks must be performed manually.
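
Oracle's incremental strategy is normally driven through RMAN, the Recovery Manager command-line tool. A minimal sketch of invoking it from Python is shown below; the OS-authenticated connection and the assumption that a level 0 baseline backup already exists are placeholders for illustration:

# Minimal sketch of Oracle's incremental backup strategy via the RMAN CLI.
# Assumes RMAN is installed, the target database is reachable, and a level 0
# (full baseline) backup already exists.
import subprocess

# Level 1 copies only the blocks changed since the level 0 baseline,
# keeping backup windows short.
rman_script = """
BACKUP INCREMENTAL LEVEL 1 DATABASE;
"""

# Connect RMAN to the local target database using OS authentication ("/").
subprocess.run(
    ["rman", "target", "/"],
    input=rman_script,
    text=True,
    check=True,
)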

The standby apply process in Oracle has no performance impact on the primary database or on data protection, whereas in SQL Server it does (Callan et al., 2010). In Oracle, silent corruptions resulting from software or hardware faults can be detected at both the standby and the primary database; such faults are not detectable in SQL Server. Oracle can quickly recover from logical corruptions and human errors, while in SQL Server recovery takes much longer (Bassil, 2012). That is because SQL Server puts much of the responsibility on the database manager, whereas the Oracle database includes features that make recovery automatic. The Oracle database also includes integrated, automatic database failover that guarantees zero data loss and no split-brain condition (Kumar, 2007). That feature is lacking in SQL Server, making it more vulnerable to data loss.

SQL Server's AlwaysOn Failover Cluster Instance runs in a failover cluster comprising multiple Windows Server Failover Clustering nodes. It offers high availability through redundancy at the instance level (Kumar, 2007), and it can also provide remote disaster recovery using a multi-subnet failover cluster instance. An availability replica can be hosted either by a failover cluster instance or by a standalone instance of the server, which means the database manager can use failover cluster instances for local instance-level high availability and Availability Groups for database-level disaster recovery. This may give the impression of being similar to Oracle's Data Guard and Real Application Clusters. However, while SQL Server's failover cluster instance secondary nodes are all passive, remaining offline and not starting their SQL Server instances in a steady state, Oracle Data Guard and Real Application Clusters start their respective database instances in a steady state and keep them always online (Kumar, 2007), which is useful for data management at all times.
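
For illustration, the state of an AlwaysOn Availability Group can be inspected through SQL Server's catalog and dynamic management views. The following minimal sketch assumes the pyodbc driver, placeholder connection details, and an instance that already hosts an availability group:

# Minimal sketch: inspecting AlwaysOn Availability Group replica roles and
# synchronization health in SQL Server 2012 or later.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=sqlnode1;DATABASE=master;UID=sa;PWD=secret"  # placeholder values
)
cursor = conn.cursor()
cursor.execute("""
    SELECT ag.name, ar.replica_server_name,
           rs.role_desc, rs.synchronization_health_desc
    FROM sys.availability_groups AS ag
    JOIN sys.availability_replicas AS ar
        ON ar.group_id = ag.group_id
    JOIN sys.dm_hadr_availability_replica_states AS rs
        ON rs.replica_id = ar.replica_id
""")
for name, replica, role, health in cursor.fetchall():
    print(f"{name}: {replica} is {role} ({health})")
conn.close()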