
Create a semiannual production plan for your new business idea, product, or service using notional demand and inventory data

Create a semiannual production plan for your new business idea, product, or service using notional demand and inventory data. This initial production plan is based on your market estimates of what you intend to sell and produce. The final paper addresses managing the project that brings your intended new product/service to the marketplace, but first you must create a production plan supported by your market forecasts; that is the purpose of this assignment.

 Prompt: The plan should replicate the techniques in the text and can be submitted in a basic tabular (spreadsheet) format. It must include the following: 

 Estimates of labor hours consumed 

Estimated number of worker requirements considering a standard work week, current inventory levels, receipts of new inventory during each month, and varying demand levels for each month of production. For service businesses that do not carry inventory or raw goods for an assembly line, the inventory of support materials/equipment or consumable materials can be used. Specifically, the following critical elements must be addressed:

 Create a semiannual production plan using notional demand and inventory. 

 Estimate the labor hours consumed.

  Estimate the number of worker requirements considering a standard work week, current inventory levels, receipts of new inventory during each month, and varying demand levels for each month of production. 
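A minimal sketch of how these three elements can be computed, using entirely hypothetical demand, receipt, inventory, and labor-standard figures (substitute your own notional data):

```python
import math

# Notional semiannual plan; every figure below is hypothetical and should
# be replaced with your own market estimates.
LABOR_HOURS_PER_UNIT = 2.5   # assumed labor standard per unit produced
HOURS_PER_WEEK = 40          # standard work week
WEEKS_PER_MONTH = 4.33       # average weeks in a month

demand   = {"Jan": 1200, "Feb": 900, "Mar": 1500, "Apr": 1100, "May": 1700, "Jun": 1300}
receipts = {"Jan": 300, "Feb": 300, "Mar": 300, "Apr": 300, "May": 300, "Jun": 300}
on_hand  = 500               # current inventory at the start of the plan

print("Month  Produce  Labor hrs  Workers")
for month, d in demand.items():
    on_hand += receipts[month]              # receipts of new inventory
    produce = max(d - on_hand, 0)           # build only what stock cannot cover
    on_hand = max(on_hand - d, 0)           # inventory carried to next month
    hours = produce * LABOR_HOURS_PER_UNIT  # estimate of labor hours consumed
    workers = math.ceil(hours / (HOURS_PER_WEEK * WEEKS_PER_MONTH))
    print(f"{month:5}  {produce:7}  {hours:9.0f}  {workers:7}")
```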

Rubric Guidelines for Submission: This short paper should adhere to the following formatting requirements: it is submitted as a Word document, 1 to 2 pages (not including title and reference pages), double-spaced, using 12-point Times New Roman font and one-inch margins. All APA citations should reference the course text and at least two additional resources.


Network Inference from Time-Series Data Using Information Theory Tools

Network Inference from Time-Series Data Using Information Theory Tools

Name:

University

Abstract

The Mutual Information Rate (MIR) measures the time rate at which information is exchanged between two non-random, correlated variables (Budden & Crampin, 2016). Since microscopic elements in complex systems are not purely random, the MIR is a fitting quantity for assessing the amount of information exchanged in intricate systems. However, its exact calculation requires infinitely long measurements with arbitrary resolution. Since it is impossible to perform infinitely long measurements with perfect accuracy, this work shows how to estimate the MIR while taking this fundamental limitation into account, and how to use it for the classification and understanding of dynamical and complex systems. Moreover, we introduce a novel normalized form of MIR that successfully infers the structure of small networks of interacting dynamical systems (Arabnia & Tran, 2011). The proposed inference methodology is robust in the presence of additive noise, different time-series lengths, and heterogeneous node dynamics and coupling strengths. It also outperforms the inference method based on Mutual Information when examining networks formed by nodes possessing different time-scales.

Network Inference from Time-Series Data Using Information Theory Tools

Analyzing complex systems is a difficult process for many people in the world today. Few tools have been created to aid such a process effectively. Additionally, network inference and complex system analysis require mathematical and computer skills that are not readily available to everyone. A successful analysis can only be carried out by an individual who is acquainted with the proper mechanisms and has the necessary understanding of the organization’s dynamics (Sameshima & Baccala, 2014). Complex systems are characterized by many interacting components that arise and evolve over time. As such, a proper analysis of the system must entail a progressive approach that takes into account the changes that occur over time. Moreover, an ideal complex system analysis tool should be balanced in such a way that it takes into account essential microscopic elements that matter to the expected outcome while ignoring components whose presence or absence should not interfere with the results (Deniz, 2018). Consequently, regardless of the similarities between different complex systems, a modeling tool must be customized to the needs of the specific network for proper inference.

Many systems in the world can be called complex. Social networks, political organizations, human cultures, the internet, brains, stock markets, and the global climate are all examples of complex systems. In each of these organizations, important information is produced through the interaction of various components within the system (Dehmer, Streib & Mehler, 2011). While each part is important, none can operate alone to produce the results that the entire system creates. Moreover, the various components that interact to create useful information are not static, which makes complex systems hard to analyze. Network inference from time-series data in a complex system implies that one needs to understand the relationships, if any, that exist between variables and how they can be altered to create the desired change.

The characteristics of a complex system can be grouped into two concepts: emergence and self-organization. Some system properties appear at different scales in a process called emergence. Mathematical models allow one to understand the factors and relationships behind these macroscopic properties at a given point in time (Bossomaier, 2016). Analyzing new occurrences at varied scales gives an individual insight into the operations of a system, which allows for better planning for the future. Moreover, the properties self-organize over time, creating a series of events that form the basis of an organization or process. Mathematical modeling helps to simplify the complexity of the system, making it a fundamental practice in everyday life. Since complex systems are characterized by nonlinear dynamics, achieving a solution by looking at the inputs alone is not possible. Information theory tools are among the few approaches that can help unravel nonlinear combinations and reach otherwise inaccessible insights (Arabnia & Tran, 2011).

Network inference is a growing field, with researchers proposing new models each day. To make the right choice, one has to look at the limitations and advantages of each proposal (Goh, Hasim & Antonopoulos, 2018). While some information theory tools are successful, they are limited in how far or how deep they can unravel the complexities of nonlinear systems. The common structures found in diverse networks pose a great challenge when creating a reliable inference method. In information theory, the measure of dependence between two variables is the mutual information (MI). To get MI, one quantifies the amount of information acquired about one variable by observing the other random variable (Barman & Kwon, 2018). If the MI of the two variables is zero, then the two properties are statistically independent, and their interactions do not affect the performance of the system. Analyzing and understanding the relationships between the microscopic elements of a complex system is the simplest way of understanding the intricacies of the whole.

In a natural complex system, it is hard to detect physical connections directly because of the system’s large size. However, treating each component as a node of a network, and the physical interactions between components as links, helps in understanding the emergent behavior of complex systems. To detect the physical connectivity of a large organization, it is vital to infer network structures from the correlations between time-series acquired from the dynamics of the various nodes. Cross-correlation and MI are typical quantities used to measure the relationship between variables within a complex system (Budden & Crampin, 2016). As such, the current paper is based on a mutual information rate (MIR) methodology to infer the structure of a complex system from time-series data.

According to Ta, Yoon, Holm & Han (2010), the mutual information rate (MIR) reflects the relationship between two variables by measuring the time rate at which information is exchanged between two correlated, non-random variables. The MIR is an appropriate tool for measuring the relationship between variables in a complex system because it can be estimated from finite measurements and calculations with arbitrary resolution. The tool makes it possible to analyze the unique properties of a system to understand the relationship between causes and effects. Through the MIR, the researchers in the current study intend to quantify the amount of information passed between two non-random nodes within a given period. Moreover, the tool will aid the team in understanding the relationship between synchronization and the exchange of information in a system (Timme & Casadiego, 2014). The purpose of the examination is to establish whether logical inferences about the dependence among variables can be drawn from the microscopic elements of a complex system.

The network inference in the current study is founded on a rule-based modeling approach that pays particular attention to microscopic scales within an establishment. Since complex systems are diverse and extremely complicated, the time-series data used in the scrutiny process can easily be simulated on a computer to help the analyst appreciate the emergence and self-organization of properties in the system over time (Shandilya & Timme, 2011). Rule-based modeling allows one to explain observed behavior in a simple language that is understandable to people without mathematical and computer skills. Further, the modeling process employed by the current paper is important in the sense that it helps the involved parties make considered predictions of the future and map a clear path that a system is bound to follow over time.

Main Body

Discussion of the Mathematical Theory

Systems produce information that can be transferred between different components. For such an exchange to happen, two distinct variables, directly or indirectly linked, must be involved (Zou, Romano, Thiel, Marwan & Kurths, 2011). In the current paper, the mode of transfer studied is time-series data, where the amount of information exchanged within a given unit of time is examined to determine the link between the non-random elements. Further, the relationship between information synchronization and the speed of transfer is also examined. A positive outcome (the existence of a link between two units) is an indication of a bidirectional connection between the variables as a result of their interaction. Through such an understanding, it is possible to correctly infer the network of a complex system and map the future of the organization with clarity.

Mutual Information: The MI between variables indicates the amount of uncertainty that one has about one variable after observing the other (Butte & Kohane, 2000). The MI is given by I_XY(N) = H_X + H_Y − H_XY (1), where H_X and H_Y are the entropies of the individual variables and H_XY is their joint entropy. The equation shows the strength of dependence between the two observed variables. For instance, when I_XY = 0, the strength of dependence between the observed elements is null, an indication that the two variables are independent. The higher the value, the stronger the connection between the variables and the higher the chance that their interaction produces a considerable effect on the overall performance of the complex system.
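A minimal numerical sketch of this definition, computing I_XY = H_X + H_Y − H_XY for a toy joint distribution (the probability values are purely illustrative):

```python
import numpy as np

def entropy(p):
    """Shannon entropy H = -sum(p log2 p) over nonzero probabilities."""
    p = np.asarray(p, dtype=float).ravel()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

# Toy joint distribution P_XY for two binary variables (values illustrative).
p_xy = np.array([[0.4, 0.1],
                 [0.1, 0.4]])
p_x = p_xy.sum(axis=1)   # marginal distribution of X
p_y = p_xy.sum(axis=0)   # marginal distribution of Y

mi = entropy(p_x) + entropy(p_y) - entropy(p_xy)   # I_XY = H_X + H_Y - H_XY
print(mi)   # ~0.278 bits > 0, so X and Y are dependent
```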

The calculation of I_XY(N) from time-series data is a difficult task. One has to compute probabilities on an appropriate probability space for which a partition can be found (Bianco-Martinez, Rubido, Antonopoulos & Baptista, 2016). Moreover, the MI measure is only suitable for comparisons between macroscopic elements of the same system, not of different structures. For time-series data to produce verifiable and usable results, the correlation decay times must be comparable, which is not the case when looking at information in different systems. As such, MI is only viable if the factors analyzed belong to a single system, to avoid the different characteristic time-scales produced by the varied correlation decay times in each organization.

Understanding entropy and conditional entropy is the first step towards knowing how MI works in analyzing time-series data. Qualitatively, entropy is a measure of uncertainty – the higher the entropy, the more uncertain one is about a random variable. This statement was made quantitative by Shannon. He postulated that a measure of uncertainty of a random variable X should be a continuous function of its probability distribution P_X(x) and should satisfy the following conditions:

· It should be maximal when P_X(x) is uniform, and in this case, it should increase with the number of possible values X can take.

· It should remain the same if we reorder the probabilities assigned to different values of X.

· The uncertainty about two independent random variables should be the sum of the uncertainties about each of them.

The only measure of uncertainty that satisfies all these conditions is the entropy, defined as H(X) = −∑_x P_X(x) log P_X(x) = −E_{P_X}[log P_X] (2). Although not particularly obvious from this equation, H(X) has a very concrete interpretation. Suppose x is chosen randomly from the distribution P_X(x), and someone who knows the distribution P_X(x) is asked to guess which x was chosen by asking only yes/no questions. If the guesser uses the optimal question-asking strategy, which is to divide the probability in half on each guess by asking questions like “is x greater than x0?”, then the average number of yes/no questions it takes to guess x lies between H(X) and H(X) + 1. This gives quantitative meaning to “uncertainty”: it is the number of yes/no questions it takes to guess a random variable, given knowledge of the underlying distribution and using the optimal question-asking strategy.
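A small sketch illustrating both properties with assumed, illustrative distributions: entropy is maximal (log2 of the number of outcomes) for a uniform distribution, matching the bisection-guessing interpretation:

```python
import numpy as np

def entropy(p):
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

uniform = np.ones(8) / 8               # 8 equally likely outcomes
skewed = [0.7, 0.1, 0.1, 0.05, 0.05]   # concentrated distribution

print(entropy(uniform))   # 3.0 bits: exactly the 3 yes/no questions needed
                          # to bisect 8 equally likely options
print(entropy(skewed))    # ~1.46 bits, well below log2(5) ~ 2.32
```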

The conditional entropy is the average uncertainty about X after observing a second random variable Y and is given by

H(X|Y) = ∑_y P_Y(y) [−∑_x P_{X|Y}(x|y) log P_{X|Y}(x|y)] = E_{P_Y}[−E_{P_{X|Y}} log P_{X|Y}]   (3)

where P_{X|Y}(x|y) ≡ P_{XY}(x, y)/P_Y(y) is the conditional probability of x given y.

With the definitions of H(X) and H(X|Y), the MI can be written as:

I(X; Y) = H(X) − H(X|Y).   (4)

Mutual information is, therefore, the reduction in uncertainty about variable X, or the expected reduction in the number of yes/no questions needed to guess X after observing Y (Dehmer et al., 2011). Note that the yes/no question interpretation even applies to continuous variables: although it takes an infinite number of questions to guess a continuous variable, the difference in the number of yes/no questions it takes to guess X before versus after observing Y may be finite and is the mutual information. While problems can arise when going from discrete to continuous variables since subtracting infinities is always dangerous, they rarely do in practice.

Different approaches to the computation of MI exist. The variations between methods arise from the mechanism used to compute the probabilities involved. In the histogram (or binning) method, a suitable partition of the 2D space into equal or adaptively sized cells is found. In the kernel density method, a kernel estimate of the probability density function is applied. The third approach estimates probabilities from the distances between nearest-neighbor points (Zou et al., 2011). In the current analysis, the first approach is used: probabilities are computed on partitions of equally sized cells in the probabilistic space generated by two variables. The process has a tendency to overestimate the values for two basic reasons, namely the finite resolution of a non-Markovian partition and the finite length of the recorded time series. These systematic errors can be reduced by a novel normalization of the MI computations.

For the numerical computation of I_XY(N), the paper defines a probabilistic space Ω, formed by the time-series data observed from a pair of nodes, X and Y, of a complex system. A partition of Ω into a grid of N × N fixed-sized cells is created, and the side length of each cell, ε, is set to ε = 1/N (Budden & Crampin, 2016). Consequently, the probability of an event i for variable X, P_X(i), is the fraction of points found in row i of the partition of Ω. Similarly, P_Y(j) is the fraction of points found in column j, and P_XY(i, j) is the joint probability computed from the fraction of points found in cell (i, j) of the same partition, where i, j = 1, …, N. The paper emphasizes that I_XY(N) depends on the partition considered for its calculation, since P_X, P_Y, and P_XY attain different values for different cell sizes ε.
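A sketch of this grid-based estimate, assuming two time series rescaled to [0, 1] and an N × N partition of equally sized cells (the bin count and test data are illustrative):

```python
import numpy as np

def mi_binned(x, y, n):
    """Histogram estimate of I_XY(N) on an n x n grid of equally sized
    cells (side 1/n after rescaling both series to [0, 1])."""
    x = (x - x.min()) / (x.max() - x.min())
    y = (y - y.min()) / (y.max() - y.min())
    counts, _, _ = np.histogram2d(x, y, bins=n, range=[[0, 1], [0, 1]])
    p_xy = counts / counts.sum()           # joint probabilities P_XY(i, j)
    p_x = p_xy.sum(axis=1)                 # row marginals P_X(i)
    p_y = p_xy.sum(axis=0)                 # column marginals P_Y(j)
    mask = p_xy > 0                        # sum over occupied cells only
    outer = np.outer(p_x, p_y)
    return np.sum(p_xy[mask] * np.log2(p_xy[mask] / outer[mask]))

rng = np.random.default_rng(0)
x = rng.random(10_000)
y = 0.5 * x + 0.5 * rng.random(10_000)     # y partially driven by x
print(mi_binned(x, y, n=16))               # > 0; value depends on n
```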

Mutual information brings a reduction of uncertainties concerning one variable by observing another element whose performance is believed to affect that of the former unit. High mutual information signifies a great reduction of uncertainty while low mutual information is an indication of a small reduction of ambiguity.

Mutual Information Rate: Calculating the MIR of a time series must take into consideration the partition dependence discussed in the definition of MI. MIR is defined as the mutual information exchanged per unit of time between variables, say X and Y. While calculating MIR from the MI can introduce errors related to the aforementioned partitions, other mechanisms for computing the quantity of information passed between variables ensure that the measure is invariant with respect to the resolution of the partition (Ta et al., 2010). To estimate the information passed between two nodes from finite data, the current paper computes the observed time-series quantities at a given point in time, followed by a proper normalization, to identify the connectivity structure of small networks of interacting dynamical systems.

MIR is a powerful concept in the analysis of complicated systems. The quantity is calculated from the mutual information, which is defined for pairs of variables within the organization. In the current paper, the researcher offers a simple way of calculating MIR in diverse networks and of estimating its upper and lower bounds within a system without having to compute probabilities explicitly.

In the current paper, various topologies for the network and different dynamics for the components of the dynamical systems are considered. The network inference, therefore, is done from time-series data that is observed and recorded for each component to determine the topological structure of the components’ interaction. The purpose of the paper is to determine whether the function of one variable is affected by another non-random element by looking at the amount of information passed between the two nodes in a given unit of time. Moreover, the paper seeks to determine whether synchronization of data affects the speed of information exchange between variables. Positive or negative values from this analysis help in figuring out the type of dependence, if any, between microscopic elements of the system, while providing an avenue for the researcher to map the future of the overall system.

Background

The paper introduces a new information-based approach for the analysis of networks within complex systems. The MIR computes the data transferred per unit of time between two different nodes whose interaction is believed to cause alterations in the performance of the overall system (Barman & Kwon, 2018). The normalization of MIR used in the paper is measured based on the developed network for inference. The tool is a reliable measure of interdependency between variables in the presence of additive noise, short time-series, and varying coupling strengths. The MIR is designed so that it reacts only to the most important quantities in the system, especially the correlation decay time.

One of the aspects that make the MIR an essential tool is the fact that it embodies the characteristics of a good modeling and measurement tool. Research has shown that proper analysis mechanisms must be sensitive to the necessary variables while ignoring other occurrences within the system (Timme & Casadiego, 2014). As stated earlier, complex systems are characterized by the emergence of new elements as time progresses. Therefore, it is hard to take into consideration all the new variables at each stage of development when trying to map out the future of the system. A model that is able to discard minor changes is an essential tool in the measurement of new elements at different scales.

To achieve this discriminatory role in network inference, researchers use various modeling mechanisms such as rule-based modeling (Butte & Kohane, 2000). The practice of modeling is effective in mathematical and computer science studies because it allows researchers to unravel otherwise unreachable realities. Naturally, complex systems are vast and quite complicated for anyone to analyze. The amalgamation of elements and the constant interrelation between nodes within the system make it hard to determine whether the elements have any relationship and what the nature of the interactions among the nodes is. Modeling helps one to create sustainable and reliable tools that take into account some aspects of the system while ignoring the interactions of others.

Rule-Based Modeling

Modeling a complex system requires one to consider the multiple networks, nonlinearity, emergence, and self-organization characteristics of large organizations. In rule-based modeling, particular attention is paid to the microscopic scales, because looking at the interaction of variables is the best way of understanding the complexity of the system (Goh et al., 2018). The model helps individuals explain observed behaviors, in our case, the time-series data. Moreover, rule-based modeling helps researchers and analysts make predictions and map the possible progress of the system with confidence.

Various steps are used when creating a rule-based model for a complex system. First, one has to observe the system for a while. Analysis of systems depends greatly on the experience that a person has with similar organizations. The human mind tends to link similar instances together (Barman & Kwon, 2018). As such, when a person sees an abnormal or new occurrence, he or she will most likely describe the happening based on past interaction with a similar situation. Watching and experiencing complex systems thus gives a researcher an idea of how variables interact within large organizations, providing a background upon which to build theories in the future.

Observing a particular system when trying to create a specific model for an organization gives one an idea of the possible relationship between nodes. As such, an individual is able to decide on the best measurement tool to use based on the variables that are suspected to have interdependency. One must become aware of the complex systems to model them hence the need for observation as the first step towards an effective analysis. Moreover, observation brings a clear understanding of the cause and effect within a system.

In complex organizations, it is impossible to clearly capture the causes and effects of happenings within the system because microscopic elements do not have any meaning when they are not interacting with one another (Bianco-Martinez et al., 2016). Simply put, the results of a process cannot be attributed to one particular variable in a complex system since information is found between the various parts of the organization and not within the units themselves. Observation, therefore, helps to get a glimpse of what relationships are likely to produce measurable results.

The second step in creating an ideal model for a complex system is reflecting on the possible rules that might cause the characteristics seen in the observation. Similar to the first step, reflecting on the rules depends on a person’s experience with similar situations in the past. The rules determine the best tool to use for network inference (Zou et al., 2011). The third step is deriving predictions from the rules and comparing them with reality. For instance, if a researcher thinks that two variables exist in a mutually beneficial process, he or she must compare that understanding with the realities of complex systems. Again, this step requires a good understanding of large organizations for a proper comparison of the observed rules with reality.

The fourth and last step towards building an ideal model is repeating the rules until one is satisfied with the results. The predictions made must make sense; otherwise, the examination process becomes a failure. As such, a researcher has to repeat the first three steps over and over until a reasonable conclusion is achieved (Arabnia & Tran, 2011). One aspect of complicated systems that cuts across the board is the fact that they barely change. The complex nature of the systems makes it hard for leaders and innovators to manipulate operations. As such, an analyst cannot produce ambiguous results when inferring networks within a complex system. The repetition of the steps ensures that the results arrived at are in line with expectations and with the world’s understanding of such organizations.

Rule-based modeling uses dynamical equations, theories, and first principles to determine the performance of a system at a specific time and to describe how it will change over time (Bossomaier, 2016). Other models do not go as far as analyzing the evolutionary possibilities of a system, which is the major differentiation between rule-based models and other approaches. Mostly, quantitative methods are used to determine the future paths of an organization. For instance, the MIR used in the current paper fits as a rule-based model because it quantifies the relationship between two variables to determine both the present and future relationships between non-random nodes.

When creating a model for a complex system, one has to consider other important issues that are not related to the characteristics of the organization (Deniz, 2018). For instance, it is vital for an analyst to determine the kind of questions he or she wants to address. Secondly, one should ask himself or herself at what scale should the behaviors of the observed data be described to answer the key questions. Due to the complexity of the systems, many relationships can be derived from a couple of nodes; therefore, a researcher must be keen not to include too many behaviors whose analysis may not be related to the expected results. One has to look at the microscopic elements of the system and define the dynamical rules for their behavior with an understanding of the questions that need to be answered.

Another important aspect to consider is the structure of the system. While the majority of the complex organizations are similar to some extent, it is vital to understand that a few variations are often created to make each system unique (Sameshima & Baccala, 2014). A researcher must have a clear understanding of these variations if he or she is to come up with an ideal model. Looking at the structure entails analyzing the microscopic components and grouping them in terms of the assumed interaction with one another. After that, a researcher must consider the possible state of the system. That is to say, one has to describe each variable and the dynamical state that each component can take during the system’s operations.

Lastly, researchers must consider the state of the system over time. Complex organizations are characterized by emergence and self-organization, processes that occur over time. In emergence, system properties occur at different scales depending on the operation of the components. The new elements arising at each stage of development must be taken into consideration when coming up with a proper model (Dehmer et al., 2011). One has to critically analyze how these emergent microscopic factors will affect the non-random variables chosen for the study. Additionally, elements in complex systems self-organize over time. A researcher should consider such clustering when deciding the right model for the network inference.

The five steps stated above are not easy to accomplish. Making the right choice for each question is not a trivial job, and it requires a researcher to repeat the process until the modeled behaviors mimic the key aspects of the system (Ta et al., 2010). To close the loop on these questions, a researcher has to answer a set of related questions that show the interaction of the chosen components: the scale to use in order to achieve the desired results, the components to include in the analysis, the possible connections between the chosen nodes, the unit of measurement that can reproduce the expected interactions, and the changes over time that the observed variables might produce and under what circumstances. Answering these questions helps an analyst make a mental prediction about the kind of microscopic behaviors that would arise if the examination were carried out.

Characteristics of a Good Model

A model is ideal for the analysis of a complex system if it is simple. Modeling is about simplicity, especially when a large organization is involved. Researchers create a model to have a shorter, simpler way of describing reality. As such, one should always choose the mechanism that is easier to use when choosing between two models of equal predictive power. Simplicity in this sense means that a measure must give a correct interpretation of observed data while eliminating parameters, variables, and assumptions that do not affect the expected behavior (Goh et al., 2018). The MIR tool used in the current study meets this simplicity requirement because it is easy to construct and manipulate.

The second most important characteristic of a model is validity. From a practical point of view, a model should produce results that are closely related to the observed reality. For instance, if the assumed relationship between nodes is that an increase in one causes a similar reaction in the other microscopic element, a reliable model’s predictions should agree with that observation. The reliability of the MIR is well documented (Zou et al., 2011). The mechanism has been used widely in mathematical and computer science practice, and it has consistently shown a close relationship between its computations and the observed data. In complex systems, face validity is very important; a tool that does not offer it is of little use, since the constant interaction between variables in large organizations makes a purely quantitative comparison between model predictions and observational data impossible.

However, regardless of the need for a valid model, it is important to avoid over-fitting the predictions to the observed data. Adjusting the forecasts of the tool too closely to the observed behavior makes it hard to generalize the results of the analysis (Goh et al., 2018). As mentioned earlier, an understanding of a single complex system can help one make informed judgments about other similar organizations in the future. As such, network inference results are often generalized when dealing with complex systems, but this is not possible when the correlation between predictions and observable outcomes has been forced. One has to strike a balance between simplicity and validity, because the two characteristics are equally important: increasing the complexity of the model to achieve a better fit takes away its simplicity, rendering it less useful.

The last characteristic of a good model is robustness. A model must be selective in terms of which factors interfere with its computation. Sensitivity to minor variations of the model assumptions can have unintended consequences and render the tool useless (Deniz, 2018). Errors are always present when creating a useful tool in the inference of complex system networks. As such, an effective tool must be sensitive enough to capture the major variables while ignoring the interference from non-essential factors in the analysis. For instance, in the current paper, noise is an example of an existing variable whose interference should not be considered while quantifying the relationship between information passed between two nodes and time.

The MIR tool chosen for the study is robust in that it is able to factor in the amount of data shared within a specific unit of time while ignoring issues of noise (Timme & Casadiego, 2014). When a model is sensitive to all minor variations, then the conclusions it provides are unreliable. However, in a robust measurement tool, the final results hold under minor variations of the model assumptions and parameters. A researcher can make sure that the model he or she uses for the analysis of a complex system is robust by manipulating various parameters to balance the level of sensitivity and ensure that only the essential factors are considered in the measurement process.

Dynamical System Theory

All rule-based models operate under the assumptions of dynamical system theory, including the tool used for the current study, MIR. The theory focuses on how organizations change over time instead of looking at their static nature. By definition, a dynamical system is one whose state is uniquely characterized by a set of microscopic elements whose interactions are described by predefined rules (Budden & Crampin, 2016). Understanding these rules helps one to clearly map the present situation and the possible future progression of the system. Most complex systems in the world today are dynamical by nature thus requiring the use of a rule-based model for inference of their networks.

The dynamic nature of the complex systems can be described over discrete time steps or a continuous timeline. In the current paper, the latter mechanism is used to determine the amount of information shared between two non-random variables within a given unit of time. The general mathematical formulas used for such a computation are:

Discrete-time dynamical system: x_t = F(x_{t−1}, t)

Continuous-time dynamical system: dx/dt = F(x, t)

In either case, x_t or x is the state variable of the structure at time t, which may take a scalar or vector value. F is a function that determines the rules by which the system changes its state over time (Bossomaier, 2016). The formulas given above are first-order versions of dynamical systems (i.e., the equations do not involve x_{t−2}, x_{t−3}, …, or d²x/dt², d³x/dt³, …). But these first-order forms are general enough to cover all sorts of dynamics that are possible in dynamical systems, as we will discuss later.
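A discrete-time example under these definitions, iterating the logistic map F(x) = rx(1 − x) that the paper later uses for its network nodes (the initial condition is chosen arbitrarily):

```python
# Discrete-time dynamical system: x_t = F(x_{t-1}) with the logistic map
# F(x) = r * x * (1 - x); r = 4 gives fully developed chaos.
def F(x, r=4.0):
    return r * x * (1.0 - x)

x = 0.3   # arbitrary initial condition in (0, 1)
trajectory = []
for _ in range(10):
    x = F(x)
    trajectory.append(x)
print(trajectory)   # a chaotic orbit confined to [0, 1]
```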

In the current situation, the paper explores the effectiveness of MIR versus MI in terms of how successful each is in inferring exactly the network of our small complex systems. In general, the researcher finds that the MIR outperforms the MI when different time-scales are present in the system (Zou et al., 2011). The results also show that both measures are sufficiently robust and reliable to infer the networks analyzed whenever a single time-scale is present. In other words, small variations in the dynamical parameters, time-series length, noise intensity, or topology structure maintain a successful inference for both methods. It remains to be seen what types of errors these measures produce when perfect inference is missing or impossible.

The Use of Python Modeling Tools in Network Inference

Technological advancements have made time-resolved data available for many models, but this is only useful if the right tools are used to analyze the data. Python 2.7 helps the analyst create simulation models that are effective in capturing the actual situation of the network being inferred, making the examination of a complex system easier (IJzendoorn, Glass, Quackenbush & Kuijjer, 2016). The Python tooling used in the current study is effective because it runs fast and includes additional features that allow the researcher to adapt the chosen tool (MIR) to produce the intended results. Python thus helps increase the reliability of a model by providing an easy way to manipulate variables and bring predictions and observable data into closer agreement.

Using a Python tool increases the simplicity and robustness of a mathematical tool, and it has effectively done so in the current paper (IJzendoorn et al., 2016). The approach simplifies some of the complexities found in various models, making them usable by people with little or no mathematical or computer science skills. In terms of robustness, Python creates avenues for the researcher to arrange the measurement tool to react only to important variables while remaining neutral in the presence of non-essential factors such as, in the current paper, noise. As such, the use of the mathematical modeling tool has made MIR more successful in determining the relationship between the information passed between two non-random nodes at a given time and in analyzing the effects of synchronization on the performance of the said variables.

Models for Our Complex Systems

The paper uses various topologies for the networks to analyze the various microscopic components of the complex system in question. The network inference, therefore, is carried out from time-series that are recorded for each component. That is, the nodes considered to have a reliable relationship are observed, and the time-series data is recorded for further analysis. Since various components are involved, the examinations are divided into discrete-time and continuous-time components.

Discrete-Time Units

The variables in the discrete-time class of complex systems are described and analyzed in the paper using a network of coupled maps:

x_{n+1}^i = f(x_n^i) + (α/k_i) ∑_{j=1}^{M} A_{ij} [f(x_n^j) − f(x_n^i)],

where x_n^i is the n-th iterate of map i, i = 1, …, M, M is the number of maps (nodes) of the system, α is the coupling strength, A_{ij} is the binary adjacency matrix (with entries 1 or 0, depending on whether there is a connection between nodes i and j or not, respectively) that defines the structural connectivity in the network, r is the dynamical parameter of each map, k_i is the node-degree, and f is the considered map. Where parameters are not explicitly mentioned, the paper uses r = 4 for the logistic map, to fully develop chaos, and r = 0.35 with K ≈ 6.9115 for the circle map. The paper uses these settings to study the robustness of the methodology for different coupling strengths, observational noise, and data lengths. Further, small networks with discrete dynamics and different correlation decay times for the nodes are used to test the methodology. The measurements are carried out to ensure the quality of the inference process by guaranteeing the effectiveness of the tools used for examination.

In discrete-dynamics networks, the calculations and the relationships of the nodes are given by logistic maps. The researchers construct a network of two clusters of three nodes each to determine the amount of information shared among the variables within a specific unit of time. The clusters are connected by a single link with a small coupling strength. They are constructed from time-series with different correlation decay times, creating a good example for understanding how a clustered network with different time-scales can affect the inference capabilities of MI- or MIR-based methodologies. Specifically, the cluster formed by the first three nodes is constructed using r = 4, and the dynamics of nodes 4, 5, and 6 is created using a third-order composition of the logistic map with r = 3.9.
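A hedged simulation sketch of such a two-cluster coupled-map network. The topology and coupling value are illustrative; for brevity all six nodes use the same r = 4 logistic map, whereas the paper gives the second cluster different dynamics:

```python
import numpy as np

def f(x, r=4.0):
    return r * x * (1.0 - x)   # logistic map

# Two 3-node clusters (nodes 0-2 and 3-5), fully connected internally and
# joined by one weak link between nodes 2 and 3 (illustrative topology).
A = np.zeros((6, 6), dtype=int)
for cluster in ([0, 1, 2], [3, 4, 5]):
    for i in cluster:
        for j in cluster:
            if i != j:
                A[i, j] = 1
A[2, 3] = A[3, 2] = 1

k = A.sum(axis=1)      # node degrees k_i
alpha = 0.12           # coupling strength in the chaotic regime
rng = np.random.default_rng(1)
x = rng.random(6)

T = 50_000
series = np.empty((T, 6))
for step in range(T):
    fx = f(x)
    # x_{n+1}^i = f(x_n^i) + (alpha / k_i) * sum_j A_ij [f(x_n^j) - f(x_n^i)]
    x = fx + (alpha / k) * (A @ fx - k * fx)
    series[step] = x
# `series` holds the six time series from which MIR is later estimated.
```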

Network with Continuous-Time Units

The paper uses continuous dynamics for the nodes of the network, described by the Hindmarsh-Rose (HR) neuron model. In its standard form the model is given by:

dp/dt = q + 3p² − p³ − n + I,
dq/dt = 1 − 5p² − q,
dn/dt = r[s(p − p₀) − n],

where p is the membrane potential, q is associated with the fast ion currents (Na⁺ or K⁺), and n with the slow current, for example, Ca²⁺. The remaining parameters are set as in the paper, one of them being a uniformly distributed random number in (0, 0.5) for all nodes.
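A minimal integration sketch for a single HR neuron; the parameter values I, r, s, and p₀ are common textbook choices assumed here, not necessarily the paper's exact settings:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Single Hindmarsh-Rose neuron; I, r, s, p0 are assumed textbook values.
def hr(t, state, I=3.25, r=0.005, s=4.0, p0=-1.6):
    p, q, n = state
    dp = q + 3 * p**2 - p**3 - n + I
    dq = 1 - 5 * p**2 - q
    dn = r * (s * (p - p0) - n)
    return [dp, dq, dn]

sol = solve_ivp(hr, (0, 2000), [0.1, 0.2, 0.3], max_step=0.1)
p_series = sol.y[0]   # membrane-potential time series used for inference
```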

Methods

Correlation decay time T(N). T(N) is a necessary ingredient in the inference of the topology of a network. However, calculating the correlation decay in a real-life situation is hard because it depends on quantities such as Lyapunov exponents and expansion rates, which carry a high computational cost. In the current paper, the values are obtained by estimating the number of iterations it takes for points in a cell of Ω to expand and completely cover Ω. The approach helps the researchers quickly and simply determine the time it takes for the correlation to decay to zero. The paper introduces a novel way of calculating T(N) from the diameter of a network whose links describe how points are mapped from one cell to another.

To construct measurable networks, the researchers assume that each equally sized cell occupied by at least one point represents one node of the network. Since the correlation analyzed in the current paper involves the transfer of data from one point to another, the paper creates connections between nodes by following the dynamics of points moving from one cell to another. Specifically, a connection between two cells, say m and n, exists if the dynamics takes points from cell m to cell n. If a link between the measured elements exists, its weight is equal to 1; if they are independent, the weight is 0. A network is therefore defined by a binary matrix over these microscopic elements. In this framework, a uniformly random time-series with no correlation results in a complete, all-to-all network.

T(N) is defined as the diameter of G in the current study, because T(N) is the minimum time taken for the observed points to spread fully within the network. The diameter of the system is the maximum length over all shortest paths, calculated by looking at the minimum distance required to cross the entire network. The approach used in the current study transforms the calculation of T(N) into the computation of the diameter of G by applying Johnson's algorithm.
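A sketch of this construction, assuming two time series already rescaled to [0, 1] and using networkx for the graph work:

```python
import numpy as np
import networkx as nx

def correlation_decay_time(x, y, n):
    """Estimate T(N) as the diameter of the cell-transition graph G:
    one node per occupied cell of the n x n partition of (x, y), and an
    edge m -> l whenever the trajectory moves from cell m to cell l."""
    ix = np.minimum((x * n).astype(int), n - 1)   # assumes x, y in [0, 1]
    iy = np.minimum((y * n).astype(int), n - 1)
    cells = ix * n + iy                           # flatten (i, j) to one id
    G = nx.DiGraph()
    G.add_edges_from(zip(cells[:-1], cells[1:]))
    # Diameter = longest shortest path; plain BFS suffices for an
    # unweighted graph (the paper applies Johnson's algorithm).
    lengths = dict(nx.all_pairs_shortest_path_length(G))
    return max(max(d.values()) for d in lengths.values())
```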

Calculation of MIR. To calculate the MIR from the time-series data collected over the specified time, the research truncates the summation to a finite size depending on the resolution of the data. Moreover, the paper considers small trajectory pieces of the time-series, with a length that depends on the total length of the time series. When calculating probabilities, the paper uses a Markov partition to get equal right- and left-side variables. The length L also represents the largest order T for which a partition generating statistically significant probabilities can be constructed from these trajectory pieces. Now, taking two partitions, K1 and K2, with different correlation decay times, T1 and T2, respectively, and different numbers of cells, N1 × N1 and N2 × N2, respectively, with N2 > N1, the finer partition has the larger correlation decay time. Moreover, K1 generates K2 in the sense that K2 can be obtained by pre-iterating K1 under the evolution operator F.

In order to use a partition close to a Markov one, the cells must be of a specific size. This condition can be approximated by constructing partitions with a significantly large number of equally sized cells of length ε = 1/N. The partitions used in the current paper will, however, not be Markov or generating, and that will probably cause systematic errors in the estimation of MIR. A normalization equation is used to correct these errors. The MIR is a partition-independent quantity only if the partitions are Markov, which is not the case in the current study. As such, to get correct figures, the paper calculates probabilities in Ω only over cells fulfilling a statistical-significance inequality involving the mean number of points inside all occupied cells of the partition of Ω. The equations used in the current study provide results similar to those one would get if MIR were calculated from a true Markov partition, and they guarantee that the results are not biased.

Network Inference Using MIR. In the current analysis, the use of a non-Markovian partition allows the researchers to simplify the calculations. However, the approach makes the MIR values oscillate around the expected value. Additionally, the MIR for non-Markovian partitions has a non-trivial dependence on the number of cells in the partition, which represents a systematic error. As such, since the MIR for a non-Markovian partition of N × N equally sized cells is expected to be partition-independent, the paper proposes a derived measure computed from the raw MIR estimates. This measure provides the partition independence that is suitable for network inference.

The paper calculates MIR for all M(M−1)/2 pairs of nodes in the network, which supports the inference of the system's structure. MIR values of a node with itself are discarded, because the researchers are only interested in the exchange of information between distinct nodes. Moreover, the symmetric properties of MIR make it possible for the mechanism to provide the intended results (Zou et al., 2011). The MIR exchanged between any two nodes in the network is computed by taking the expected value over different partition sizes. To remove the systematic error, the paper uses a weighted average in which the finer partitions contribute more to the value than the coarser ones: a smaller N is likely to create a partition further from a Markovian one than a partition with a larger N. Weighting the different partitions differently thus helps the researchers eliminate systematic errors.
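A small sketch of this weighted average over partition sizes; the weighting scheme and the MIR estimates are assumptions for illustration, since the excerpt does not specify them:

```python
import numpy as np

# Expected MIR over several partition sizes, weighting finer grids more
# heavily (weights and estimates below are hypothetical).
Ns = np.array([10, 20, 40, 80])                 # partition sizes tried
mir_by_N = np.array([0.42, 0.37, 0.35, 0.34])   # hypothetical estimates
weights = Ns / Ns.sum()                         # finer partition, larger weight
print(np.average(mir_by_N, weights=weights))    # weighted expected MIR
```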

The novel normalization proposed in the current study follows these principles. First, on an equally sized grid of size N, we subtract from the MIR, calculated for all pairs of nodes, its minimum value over all pairs. Theoretically, a disconnected pair should have a MIR value close to zero; in practice, however, the situation is different because of the systematic errors coming from the use of a non-Markovian partition, as well as from the information flow passing through all the nodes in the network (Goh et al., 2018). For example, the effects of a perturbation at one single node will arrive at any other node in a finite amount of time. The subtraction is proposed to reduce these two undesired overestimations of MIR. After this step, we remain with MIR as a function of N. Normalizing then by the difference between the maximum and the minimum, where again the maximum and minimum are taken over all pairs, we construct a relative magnitude, namely (MIR_XY − min)/(max − min).

The paper further applies different grid sizes to obtain the MIR value once the maximum number of cells has been established. The formula produces results that would be achieved using a Markov partition, but without the troubles associated with that mechanism. Moreover, the approach helps the researchers not only to analyze the amount of information passed between two non-random variables but also to examine the effects of synchronization on the performance of networks within the complex system. The paper also makes a second normalization at this point to eliminate systematic errors and reduce the interference of external factors with the microscopic factors.

The normalized quantity is computed for each pair X, Y to obtain an average value. The higher the value, the more information is exchanged between the two nodes per unit of time. The same quantity helps to determine whether synchronization of information at one variable interferes with the exchange of data between the other two units, and it allows the researchers to identify the pairs of nodes that transfer considerably more information than others. Moreover, to perform network inference from the MIR, the researchers fix a threshold in (0, 1) and create a binary adjacency matrix whose entries are 1 wherever the normalized MIR is higher than the threshold. The threshold lets the researchers infer the various networks within the organization separately and comparatively. Based on the results, it is evident that there are intervals of thresholds within the set limits that form a band representing 100% successful network inference.
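A compact sketch of both steps described above — the (MIR − min)/(max − min) normalization and the thresholding that yields a binary adjacency matrix — using hypothetical MIR values:

```python
import numpy as np

def infer_adjacency(mir, tau):
    """Normalize a pairwise MIR matrix to [0, 1] via
    (MIR - min) / (max - min) and threshold at tau in (0, 1)."""
    m = mir.astype(float).copy()
    np.fill_diagonal(m, np.nan)            # ignore self-pairs
    lo, hi = np.nanmin(m), np.nanmax(m)
    norm = (m - lo) / (hi - lo)
    adj = (norm > tau).astype(int)         # 1 = inferred link
    np.fill_diagonal(adj, 0)
    return adj

# Hypothetical symmetric MIR estimates for 4 nodes:
mir = np.array([[0.0, 0.9, 0.8, 0.1],
                [0.9, 0.0, 0.7, 0.2],
                [0.8, 0.7, 0.0, 0.15],
                [0.1, 0.2, 0.15, 0.0]])
print(infer_adjacency(mir, tau=0.5))       # recovers the 3-node cluster
```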

In general, the usefulness of the proposed network inference methodology is measured by the absolute difference between the real topology and the one inferred for different threshold values. We find that whenever there is a band of threshold values, there is successful inference without errors. In practical situations, where the underlying network is unknown and the absolute difference is impossible to compute, the ordered values of the MIR or of other similarity measures show a plateau which corresponds to the band of thresholds mentioned above. In particular, if the plateau is small, the paper uses a method to increase its size by silencing the indirect connections, allowing for a more robust recovery of the underlying network.

Results for Network Inference

Discrete systems. In the current study, the performance of the various measures for network inference, where the dynamics of each node is described by a circle or a logistic map, is evaluated using three different models. The network structure that makes up the small network of interacting discrete-time systems is recovered by comparing the larger and smaller values exhibited by each node pair. Here, we analyze the effectiveness of the inference as the coupling strength α between connected nodes is varied. The researchers have shown that, for the logistic and circle maps with the same topology, the dynamics is quasi-periodic for α > 0.15 and chaotic for 0 ≤ α ≤ 0.15. We therefore choose the coupling strength in the subsequent tests to be equal to 0.03 or 0.12, both values corresponding to chaotic dynamics.

From the analysis, it is evident that the wider the band, the bigger the probability of performing a complete reconstruction, and therefore the more robust the reconstruction. When dealing with experimental data, where the correct topology is unknown, the optimal threshold can be determined by the range of consecutive thresholds for which the inferred topology is invariant. The reconstruction percentage decreases when non-existent links are inferred or real ones are missed. However, to reduce the effects of systematic errors on the inference process, each time such an error occurs, the percentage is decreased by an amount relative to the number of real links in the original network.

In determining the effects of noise and time-series length, the paper starts by analyzing the effectiveness of MIR for different time-series lengths, using the dynamics of the logistic map for each node. When the coupling is closer to 0.15, a relatively short time-series is enough to infer the original network correctly. On the other hand, when the coupling is closer to 0.03, a longer time-series is needed for a comparable reconstruction. The results indicate that successful reconstruction from short time-series depends on the intensity of the coupling strength. It is notable, however, that exact inference can always be achieved in this dynamical regime if a sufficiently long time-series is available. The best reconstruction using MIR is obtained for coupling strengths in a dynamical regime where chaotic behavior is prevalent.

Neural Networks. In the continuous-dynamics analysis given by the HR system, the researchers use two electrical coupling settings, both examined across the time-series lengths of the involved variables. Based on the findings, it is clear that MIR is able to infer the correct network structure for small networks of continuous-time interacting components.

Comparing Mutual Information and Mutual Information Rate. Finally, the researchers compare MI and MIR to assess the effectiveness of the proposed methodology for network inference. The same normalization process used for MIR is applied to MI to allow an appropriate comparison. In particular, we infer the network structure of the clustered system. The different dynamics of the two groups produce different correlation decay times, T(N), for nodes X and Y, in particular when the pair of nodes comes from different clusters. The different correlation decay times produce a non-trivial dynamical behavior that challenges the performance of MI for network inference.

In this paper, we have introduced a new information-based mechanism to infer the network configuration of complex systems. The MIR is an information measure that computes the information transmitted per unit of time between pairs of components in a complex system. The results show that MIR is a robust measure for performing network inference in the presence of additive noise, short time-series, and systems with different coupling strengths. Since MIR and MI depend on the correlation decay time T, they are suitable for inferring the correct topology of networks with different time-scales. In particular, we have explored the efficacy of MIR versus MI in terms of how successful they are in inferring exactly the network of our small complex systems. In general, we find that the MIR outperforms MI when different time-scales are present in the system. Our results also show that both methods are sufficiently robust and reliable to infer the analyzed networks whenever a single time-scale is present.

References

Arabnia, H. & Tran, Q.-N. (2011). Software tools and algorithms for biological systems. New York, NY: Springer.

Barman, S. & Kwon, Y. (2018). A Boolean network inference from time-series gene expression data using a genetic algorithm. Bioinformatics, 34(17), 927-933.

Bianco-Martinez, E., Rubido, N., Antonopoulos, C. G. & Baptista, M. S. (2016). Successful network inference from time-series data using mutual information rate. Chaos: An Interdisciplinary Journal of Nonlinear Science, 26(4), 89-93.

Bossomaier, T. (2016). An introduction to transfer entropy: Information flow in complex systems. Cham, Switzerland: Springer.

Budden, D. M. & Crampin, E. J. (2016). Information theoretic approaches for inference of biological networks from continuous-valued data. BMC Systems Biology, 10(1), 89.

Butte, A. & Kohane, I. (2000). Mutual information relevance networks: Functional genomic clustering using pairwise entropy measurements. Pacific Symposium on Biocomputing, 5, 415–426.

Dehmer, M., Streib, F. & Mehler, A. (2011). Towards an information theory of complex networks: Statistical methods and applications. Basel: Birkhäuser.

Deniz, D. (2018). Transfer entropy. MDPI – Multidisciplinary Digital Publishing Institute.

Goh, Y., Hasim, H. & Antonopoulos, C. (2018). Inference of financial networks using the normalized mutual information rate. PLoS ONE, 13(2), e0192160.

IJzendoorn, D. G., Glass, K., Quackenbush, J. & Kuijjer, M. L. (2016). PyPanda: A Python package for gene regulatory network reconstruction. Bioinformatics (Oxford, England), 32(21), 3363-3365.

Sameshima, K. & Baccala, L. (2014). Methods in brain connectivity inference through multivariate time series analysis. Boca Raton, FL: CRC Press.

Shandilya, S. & Timme, M. (2011). Inferring network topology from complex dynamics. New Journal of Physics, 13, 013004.

Ta, X., Yoon, N., Holm, L. & Han, S. (2010). Inferring the physical connectivity of complex networks from their functional dynamics. BMC Systems Biology, 4(70), 1–12.

Timme, M. & Casadiego, J. (2014). Revealing networks from dynamics: An introduction. Journal of Physics A: Mathematical and Theoretical, 47, 343001.

Zou, Y., Romano, M., Thiel, M., Marwan, N. & Kurths, J. (2011). Inferring indirect coupling by means of recurrences. Chaos, 21(4), 1099–1111.


Great Depression and the 2007-2009 recession using economic theory and your reading

 ASSIGNMENT IN FINANCE/ECONOMICS. Contemporary Issues in Global Finance (length should be about 8-9 double-spaced pages)
I. Read the following statement and answer the following questions on the Great Depression and the 2007-2009 recession using economic theory and your reading.
Nobel Prize winning economist Paul Krugman in his column in the New York Times on November 7, 2010, entitled “Doing It Again” wrote: “Eight years ago, Ben Bernanke, already a governor at the Federal Reserve although not yet chairman, spoke at a conference honoring Milton Friedman. He closed his talk by addressing Friedman’s famous claim that the Fed was responsible for the Great Depression, because it failed to do what was necessary to save the economy. “
“You’re right,” said Mr. Bernanke, “we did it. We’re very sorry. But thanks to you, we won’t do it again.” Famous last words. For we are, in fact, doing it again.
Q: Did we in fact “do it again” as Krugman claimed? More specifically:
(1) Was the Fed’s policy stance the same in the recent recession as it was in the Great Depression?
(2) Was the outcome – the magnitude and duration of the recessions – the same in the two episodes?
(3) If so, why? If not, why not?
II. Consider this quote from Adam Smith
“The man of system…is apt to be very wise in his own conceit; and is often so enamored with the supposed beauty of his own ideal plan of government, that he cannot suffer the smallest deviation from any part of it… He seems to imagine that he can arrange the different members of a great society with as much ease as the hand arranges the different pieces upon a chessboard. He does not consider that in the great chessboard of human society, every single piece has a principle of motion of its own, altogether different from that which the legislature might choose to impress upon it.” The Theory of Moral Sentiments, Part VI Section II, Chapter II, pp. 233-4, para 17.
Q: (1) Discuss Smith’s statement and compare it to the views espoused by Friedrich Hayek on economic organization.
(2) Are Smith and Hayek of similar mindsets?
III. Consider the following statement: Because there is a stable tradeoff between inflation and unemployment, the Federal Reserve can reliably decrease unemployment by simply producing a higher rate of inflation through its monetary policies.
Q: (1) Is this statement true? Explain your answer based on your understanding of both theory and empirical evidence.
(2) Suppose the Federal Reserve continually tries to push unemployment lower. What are the possible consequences?
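One standard way to formalize the answer, assuming the textbook expectations-augmented Phillips curve (a framework supplied here for illustration; it is not quoted from the assignment):

\[
\pi_t = \pi_t^{e} - \beta\,(u_t - u^{n}), \qquad \beta > 0,
\]

where \(\pi_t\) is inflation, \(\pi_t^{e}\) expected inflation, \(u_t\) unemployment, and \(u^{n}\) the natural rate. Holding \(\pi^{e}\) fixed, higher inflation lowers unemployment, which is the short-run tradeoff. Once expectations catch up (\(\pi^{e} = \pi\)), the equation collapses to \(u = u^{n}\) at any inflation rate, so the long-run Phillips curve is vertical, and continually pushing \(u\) below \(u^{n}\) produces accelerating inflation rather than a permanent employment gain.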


create a frontend application using Vue, HTML, CSS Flex/Grid, and JavaScript

CSCI 4441-01 / CSCI 6655-01

Web-Database Application Development Fall 2021 || Final Project

The objective of this project is to test you in both frontend and backend knowledge. In this project, you are going to create a frontend application using Vue and set up a server using NodeJS/ExpressJS.

An abstract view of your final project:

There are two major deliverable components in your project:

1.      Frontend: You need to create a frontend application using Vue, HTML, CSS Flex/Grid, and JavaScript. The feature of the application should be as follows:

a.      It should be a single page application

b.      An application must be developed using Vue

c.      The application should be broken down into more than two components under the root component.

d.      The application must have data passing from one component to another

e.      The data must be coming from your Node server deployed on Heroku (or any other platform you would like to use to host your node server, NOT ON YOUR LOCALHOST)

f.       If I run your Vue application from my laptop, it should access data from your server without any hassle.

g.      Your application must have more than 4 types of data, and it should fetch more than 5 data items (refer to the class exercise code)

2.      Server: You need to set up a server using NodeJS and deploy it to Heroku.

a.      Your server homepage must be your portfolio (the one that you built for your midterm). It means that when you open your https://something.com/ it should show your website

b.      The JSON data must be served at yourwebsite.com/api. Your frontend application should be fetching the data from this URL.

c.      Your public folder should contain your portfolio website + json file

What are your deliverables?

1.      Wireframe for your frontend application

2.      Your frontend Vue application

3.      Your Node JS Server hosting your portfolio website and the API.

How to submit your work?

1.      Please attach your wireframe, the link to your server in the pdf file, your GitHub link

·        Please make two separate repositories on GitHub for your frontend code and backend code.

2.      You need to zip your Vue application code in one zip folder <yourName>_vue.zip. Please make sure to zip the whole folder. Your application should start when I run it using npm run serve in the CLI

3.      You need to zip your Node server code in one zip folder <yourName>_node.zip. Please make sure to zip the whole folder.

4.      Zip them all (pdf, Vue zip, node zip) together in another zip folder and then submit it.

Extra Credit:

Using MongoDB: You can get your data from the MongoDB server and host that data in JSON format in your server.

Score Distribution: Total 110

Frontend application (Vue): 50

Backend application (Server, API, Portfolio): 50

GitHub repo: 10

Total: 110

Extra credit: 15 (MongoDB)


Using the Malthusian model, explain why a one-off improvement in technology does not increase living standards in the long-run

Two paragraphs each for questions 1-3:

1. Using the Malthusian model, explain why a one-off improvement in technology does not increase living standards in the long run. (A formal sketch of the model follows this question list.)

2. How do the law of one price and studies of market integration shed light on the causes of the Great Divergence?

3. Explain the relationship between the EMP and the development of labor markets according to De Moor and Van Zanden.
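A minimal formal sketch of the mechanism behind question 1, assuming a standard textbook Malthusian setup (the functional forms below are illustrative assumptions supplied here, not part of the prompt): with land fixed, income per person falls as population rises, while population growth rises with income:

\[
y_t = A\,L_t^{-\alpha}, \qquad \frac{L_{t+1}-L_t}{L_t} = n(y_t), \quad n'(y) > 0, \quad n(y^{*}) = 0,
\]

where \(y_t\) is income per person, \(L_t\) population, and \(A\) technology. In the steady state, income settles at the subsistence level \(y^{*}\) where population growth is zero. A one-off increase in \(A\) raises \(y\) above \(y^{*}\) only temporarily: population then grows, and \(L\) keeps rising until income per person is driven back down to the same \(y^{*}\). The only permanent effect is a larger population, not a higher living standard.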

For the section below (800 words each), choose one, whichever is easier:

4. What is the Great Divergence debate about?  What do the various sides in this debate agree and disagree about? What does the latest empirical evidence suggest about the timing of the Great Divergence?

5. What is Smithian economic growth? Provide some examples of societies that experienced Smithian economic growth. Discuss why these episodes did not give rise to sustained economic growth.


Globalization: Critically assess the impact of Globalization by using the three Globalization readings in unit 3 and the film El Contrato by Min Sook Lee

 1000 words. GNED 101: INTRODUCTION TO ARTS & SCIENCES
Essay (15%)
Instructions: Choose one topic from the writing prompts below. Then, develop a strong thesis, which you will prove with your essay. Your essay should be approximately 1000 words.
Research: Make sure to provide any definitions where necessary, and use evidence from your readings to make your arguments. Students who do not reference at least one reading (some questions require more than one) in their answer will score very poorly. You are also required to use a minimum of two resources outside your readings. These sources must be credible and appropriate for an academic paper (e.g., academic journals, newspaper articles, reputable online sources, etc.). You may also use other readings available on Blackboard that were not assigned. You must cite all sources; any submissions with improper or missing citations will receive a zero grade.
Format: Make sure to create a title page with your full name, student number, and an interesting title. Please use APA style for all aspects of this assignment.
Due Date: Please see the Critical Path.
Submissions: All submissions must be uploaded to the assignment dropbox (see Assignments on Blackboard).
________________________________________________________________________________
Choose ONE of the following topics
• Globalization: Critically assess the impact of Globalization by using the three Globalization readings in unit 3 and the film El Contrato by Min Sook Lee (link to the film in Course Readings – Unit 3 folder) as your example. You will need to read the globalization articles and watch the film in order to complete this question. In addition, find at least two other similar examples of issues raised in the readings and film.
• Two views of the media: How should we evaluate the effects of media? In your essay, explore the arguments presented in the article “Two Historical Views of the Media” by O’Shaughnessy and Stadler, and apply them to a current example (or set of examples). Your essay should focus on the role the media play in shaping how we see and understand the world around us. You may consider traditional media like newspapers, radio, or television; or new media, such as popular and influential social networking sites like Facebook, Twitter, Reddit, or YouTube.
• Climate Change: Explore themes raised in the Davidson article in the light of recent media coverage on global warming (severe storms, oil sands, melting arctic ice, etc.). Discuss the causes of climate change, its impact on our lives and on the planet and our power to combat it. Use at least two academic or news articles to supplement the course reading.
Essay Evaluation Rubric

Content (40%)
• Unsatisfactory: Misunderstands or ignores critical course concepts, or does not address the assigned topic; does not establish a clear thesis.
• Needs Improvement: Understanding of some course concepts is significantly flawed or not clearly demonstrated; does not create a strong thesis and/or leaves the thesis unproven.
• Average: Good understanding of most concepts, with minor errors or gaps; creates a clear thesis, but may struggle to support it at times.
• Excellent: Excellent understanding of concepts; thoroughly responds to the topic with a clear, well-supported thesis statement.

Analysis (20%)
• Unsatisfactory: Does not draw upon real-world examples.
• Needs Improvement: Draws upon real-world examples but with some confusion/gaps.
• Average: Connects course themes to examples, but misses some links.
• Excellent: Relates themes and issues from the readings to the chosen example(s).

References (15%) (please note that an Unsatisfactory grade in this category will result in the overall failure of this assignment)
• Unsatisfactory: Does not reference assigned readings; does not refer to outside sources.
• Needs Improvement: Makes passing reference to a reading; refers to outside source(s) that are not appropriate for academic writing.
• Average: Draws upon one or two course readings, as required; does not make satisfactory use of outside sources.
• Excellent: Applies and critically expands upon course readings to support his or her argument; uses at least two academic-level outside sources to support the thesis.

Structure (10%)
• Unsatisfactory: Essay lacks clear organization (i.e., introduction, body, conclusion); essay is too brief.
• Needs Improvement: Confusing or incomplete essay structure; slightly below the word count requirement.
• Average: Makes an effort to form an organized and logical essay structure; length is appropriate.
• Excellent: Organized, logical, and polished essay structure; length is appropriate.

Mechanics (10%)
• Unsatisfactory: Frequent major and minor grammar errors impair clarity.
• Needs Improvement: Some major and many minor grammar errors.
• Average: No more than one major, or five minor, grammar errors.
• Excellent: Virtually error free.

Format (5%)
• Unsatisfactory: No APA style.
• Needs Improvement: Imperfect APA style.
• Average: APA style is used with errors.
• Excellent: APA style is virtually error free.


Using your knowledge of BITS, determine the functional dependencies that exist in the following table

Exercise “Project” -2
1. Using your knowledge of BITS, determine the functional dependencies that exist in the
following table. After determining the functional dependencies, convert this table to an
equivalent collection of tables that are in third normal form:



2. List the functional dependencies in the following table that concern invoicing (an
application BITS is considering adding to its database), subject to the specified
conditions. For a given invoice (identified by the InvoiceNum), there will be a single
client. The client’s number, name, and complete address appear on the invoice, as does
the date. Also, there may be several different tasks appearing on the invoice. For each
task that appears, display the TaskID, description, category, and price. Assume that each
client that requests a particular service task pays the same price. Convert this table to an
equivalent collection of tables that are in third normal form:
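The exercise table itself is not reproduced in this posting, but the dependencies stated in the prompt (InvoiceNum determines the client and the date, ClientNum determines the client's name and address, and TaskID determines description, category, and price) are enough to illustrate the target shape. A minimal sketch of a third normal form decomposition consistent with those dependencies, using hypothetical column names:

-- Hypothetical column names and types; adapt to the actual exercise table.
CREATE TABLE Client (
  ClientNum   INT PRIMARY KEY,
  ClientName  VARCHAR(50),
  Street      VARCHAR(50),
  City        VARCHAR(50)
);

CREATE TABLE Invoice (
  InvoiceNum  INT PRIMARY KEY,
  InvoiceDate DATE,
  ClientNum   INT,
  FOREIGN KEY (ClientNum) REFERENCES Client (ClientNum)
);

-- Price lives here because every client pays the same price for a given task.
CREATE TABLE Task (
  TaskID      CHAR(8) PRIMARY KEY,
  Description VARCHAR(50),
  Category    VARCHAR(20),
  Price       DECIMAL(6,2)
);

-- Each invoice can list several tasks, and each task can appear on many invoices.
CREATE TABLE InvoiceLine (
  InvoiceNum  INT,
  TaskID      CHAR(8),
  PRIMARY KEY (InvoiceNum, TaskID),
  FOREIGN KEY (InvoiceNum) REFERENCES Invoice (InvoiceNum),
  FOREIGN KEY (TaskID) REFERENCES Task (TaskID)
);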



3. BITS wants to store information about the supervisors, including their supervisor number
and the relationship to consultants. Supervisors can work with multiple consultants, but
consultants only have one supervisor. In addition, supervisors specialize in working with
clients in specific task categories. Using this information, convert the following
unnormalized relation to fourth normal form:
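The source table is again omitted from this posting, but the prompt fixes the multivalued dependencies: a supervisor works with many consultants (each consultant has exactly one supervisor) and, independently, specializes in many task categories. A sketch of a fourth normal form decomposition under those assumptions, with hypothetical names:

-- Hypothetical names; adapt to the actual exercise table.
CREATE TABLE Supervisor (
  SupervisorNum  INT PRIMARY KEY,
  SupervisorName VARCHAR(50)
);

-- Each consultant reports to exactly one supervisor.
CREATE TABLE Consultant (
  ConsultantNum INT PRIMARY KEY,
  SupervisorNum INT,
  FOREIGN KEY (SupervisorNum) REFERENCES Supervisor (SupervisorNum)
);

-- Category specialization is independent of which consultants are supervised,
-- so it gets its own table to eliminate the multivalued dependency.
CREATE TABLE SupervisorCategory (
  SupervisorNum INT,
  Category      VARCHAR(20),
  PRIMARY KEY (SupervisorNum, Category),
  FOREIGN KEY (SupervisorNum) REFERENCES Supervisor (SupervisorNum)
);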


The membership records in the Membership table can be updated using an UPDATE statement. Such a statement can update any non-PK column value including the membership levels

COMPUTING
Database Programming and Implementation
Please Print Clearly In CAPITALS
Surname
First Name
Student ID
Signature
Student Code of Conduct
University students have a responsibility to be familiar with the Student Code of
Conduct: https://policies.mq.edu.au/document/view.php?id=233
Student Support
University provides a range of support services for students. For details, visit http://students.mq.edu.au/support/
The background knowledge for the assignments is given in the textbook(s), lectures, any other components of the unit, in the prerequisite units ISYS114 or COMP1350, and in the readings provided on ilearn. However, some parts of the assignments may not be answered without prior independent research and/or searching for other sources of information.
This assignment concerns database programming and implementation. It will be marked out of 100 and will contribute 10% towards your final grade. It consists of developing procedures and triggers in MySQL, creating and populating the database tables, and running test scripts against the tables. The description of the Problem domain is given below.
1 Problem Domain
The context of this Assignment is the same as for Assignment 2, namely the Magic Ale (MA). This has been reproduced as is in the Appendix for your convenience.
A DDL script (A3createDB.sql) for creating the corresponding database, and a DML script (A3populateDB.sql) for populating this database with some sample data are being provided in the Assignment 3 folder.
2 Task Specifications
Task 1 (10 marks)
Create the tables in the Magic Ale database by running the DDL script provided in the ‘Assignment 3’ folder. Then insert some sample records into the tables by running the provided DML script. Verify that the tables are created and populated as intended.
Task 2 (30 marks)
The membership records in the Membership table can be updated using an UPDATE statement. Such a statement can update any non-PK column value including the membership levels, but the Magic Ale has certain rules about membership level upgrades:
• Only those members with a non-expired membership can receive an upgrade.
• Only the SILVER members can be upgraded to the GOLD level.
• Only the GOLD members can be upgraded to the PLATINUM level.
• There is no further upgrade for the PLATINUM members.
You will write a BEFORE UPDATE trigger called CHECK_MEMBERSHIP_UPDATE, which fires when an attempt is made to update a record in the Membership table. The trigger has to check the conditions above to make sure that they are satisfied by the update request. If the conditions are satisfied, the UPDATE statement is allowed to proceed. Otherwise, a meaningful message needs to be displayed to the user.
Note that a membership level can also be downgraded in a similar fashion but you are not responsible for checking the downgrading rules.
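A minimal sketch of such a trigger, assuming the Membership table exposes columns along the lines of MemberLevel and ExpiryDate (hypothetical names: the actual column names must be taken from the provided A3createDB.sql script):

DELIMITER //
CREATE TRIGGER CHECK_MEMBERSHIP_UPDATE
BEFORE UPDATE ON Membership
FOR EACH ROW
BEGIN
  DECLARE old_rank INT;
  DECLARE new_rank INT;
  -- Rank the levels so that an upgrade is simply a rank increase.
  SET old_rank = FIELD(OLD.MemberLevel, 'SILVER', 'GOLD', 'PLATINUM');
  SET new_rank = FIELD(NEW.MemberLevel, 'SILVER', 'GOLD', 'PLATINUM');
  IF new_rank > old_rank THEN
    -- Upgrades are only allowed on non-expired memberships.
    IF OLD.ExpiryDate < CURDATE() THEN
      SIGNAL SQLSTATE '45000'
        SET MESSAGE_TEXT = 'Expired memberships cannot be upgraded';
    -- SILVER -> GOLD and GOLD -> PLATINUM are the only allowed upgrades,
    -- so the rank may rise by exactly one level at a time.
    ELSEIF new_rank - old_rank <> 1 THEN
      SIGNAL SQLSTATE '45000'
        SET MESSAGE_TEXT = 'Membership can only be upgraded one level at a time';
    END IF;
  END IF;
  -- Downgrades (new_rank < old_rank) pass through unchecked, as required.
END//
DELIMITER ;

Ranking the levels with FIELD() keeps both upgrade rules in a single comparison, makes PLATINUM automatically the ceiling (nothing ranks above it), and lets downgrades pass through untouched.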
Task 3 (30 marks)
In this task, you will write a procedure called BrandNameCampaign, which takes a brand name as input and creates a new campaign containing the top 5 most expensive products with that brand name. The campaign will have a 4-week duration and will start exactly two weeks after its creation. For the campaign, SILVER level members will receive a 10% discount, GOLD level members 20%, and PLATINUM level members 30%. If there are five or fewer products with that brand name, all those products will be included in the campaign.
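Under the same caveat, a sketch of the procedure, assuming hypothetical tables Campaign (with an auto-increment key and per-level discount columns), CampaignProduct, and Product; all names and the discount representation must be adapted to the provided DDL:

DELIMITER //
CREATE PROCEDURE BrandNameCampaign(IN p_brand VARCHAR(50))
BEGIN
  DECLARE v_campaign INT;
  -- Four-week campaign starting exactly two weeks from today.
  INSERT INTO Campaign (StartDate, EndDate, SilverDiscount, GoldDiscount, PlatinumDiscount)
  VALUES (DATE_ADD(CURDATE(), INTERVAL 2 WEEK),
          DATE_ADD(CURDATE(), INTERVAL 6 WEEK),
          0.10, 0.20, 0.30);   -- discounts stored as fractions here; match the DDL's convention
  SET v_campaign = LAST_INSERT_ID();  -- assumes an auto-increment campaign key
  -- Attach the (at most) five most expensive products of the given brand;
  -- LIMIT naturally handles brands with five or fewer products.
  INSERT INTO CampaignProduct (CampaignID, ProductID)
  SELECT v_campaign, ProductID
  FROM Product
  WHERE BrandName = p_brand
  ORDER BY Price DESC
  LIMIT 5;
END//
DELIMITER ;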
Task 4 (30 marks)
This task involves testing the code developed in Tasks 2 & 3.
Part (a) (10 marks) First you are required to test the programs you wrote against the sample data provided as part of Task 1 to see if they work. These data constitute a minimal test against a very small number of records and are unlikely to demonstrate the full functionality of your programs.
Part (b) (20 marks) Next you carry out a more extensive test by testing the programs against a larger set of records that are designed by you to easily expose any flaws in your programs. You do that by deleting records, adding records, or modifying the records in other ways, and then calling different procedures and/or firing the trigger.
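For instance, the initial test script might contain statements of the following shape, where the member IDs are placeholders to be matched against the provided sample data (the brand echoes the 'Tooheys Old Dark Ale' example from the Appendix):

-- Expected to succeed: SILVER -> GOLD on a non-expired membership.
UPDATE Membership SET MemberLevel = 'GOLD' WHERE MemberID = 101;

-- Expected to be rejected by the trigger: SILVER -> PLATINUM skips a level.
UPDATE Membership SET MemberLevel = 'PLATINUM' WHERE MemberID = 102;

-- Expected to be rejected: upgrade attempt on an expired membership.
UPDATE Membership SET MemberLevel = 'GOLD' WHERE MemberID = 103;

-- Create a campaign, then inspect the rows it created.
CALL BrandNameCampaign('Tooheys');
SELECT * FROM Campaign;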
3 Report Specification
You will also prepare and submit a report (in the PDF format). A word file template for this purpose will be provided which you will complete, convert to pdf, and submit. The file you submit will be named: yourLastname_yourFirstname _report.pdf.
Your report should have the following sections:
1. The initial State of the database as created in Task 1: Paste to the word file the screen shots showing the provided sample data in the tables. Do not change any of the table or column names given in the provided DDL script.
2. Stored Programs: Paste into this section the programs you wrote (the contents of the SQL file yourLastname_yourFirstname _programs.sql that you prepared for Tasks 2 & 3).
3. Required Testing against the sample dataset as required in Task 4 Part (a): Paste into this section your SQL statements for the initial tests you ran (one by one) and then the corresponding results as screenshots. Also place your SQL statements into a file called yourLastname_yourFirstname _testscript.sql
4. More Extensive Testing as required in Task 4 Part (b): Explain what sort of changes you are going to make to which tables, what tests you are going to run, and why. Paste into this section your SQL statements for the extensive tests you ran (one by one) and then the corresponding results as screenshots. Also place all of your SQL statements into yourLastname_yourFirstname _testscript.sql.
5. Notes (optional): In this section, you might wish to note anything, such as whether you faced any particular difficulty in completing any of these tasks, the nature and extent of any help you received from anyone, and why.
Remember to convert the report Word file to pdf and submit only the pdf file.
4 Your Submission
You will submit three files:
1. yourLastname_yourFirstname _report.pdf.
2. yourLastname_yourFirstname _programs.sql
3. yourLastname_yourFirstname _testscript.sql.
You will submit the files in two stages. In the first stage, as a minimum, you must submit the following two draft files by Tuesday, October 26, 2021, 11:55 PM (Week 12):
a) yourLastname_yourFirstname _programs.sql, including either the trigger CHECK_MEMBERSHIP_UPDATE or the procedure BrandNameCampaign in it, and
b) yourLastname_yourFirstname _report.pdf, with Section (1) complete and sections (2) and (3) partially complete.
You can modify these files while preparing your final version.
The final version of these three files must be submitted by Friday, October 29, 2021, 11:55 PM.
Note Regarding Draft Submission. You are strongly advised to submit a draft of your work by the “Draft Submission Due Date”. Students who have not submitted a draft will not qualify for special consideration should they be unable to submit by the deadline due to technical issues such as failure to connect to the database server.
Late Submission Policy. No extensions on assignments will be granted without an approved application for Special Consideration.
Late submissions will be accepted up to three days after the deadline, but there will be a deduction of 10% mark for each day (whole or part) of delay, unless you have received special consideration. If special permission is granted for a delay of more than three days, the student’s mark for this assignment will be calculated based on their overall performance in the Final Exam. Please see the Unit Guide for details.
Appendix: Problem Context from Assignments 1 & 2
This assignment concerns a liquor shop chain in Sydney, called The Magic Ale (MA). The objective of this assignment is to develop a database system that will be used to centrally store and manage all relevant information for the branches of MA.
The information to be stored includes information on different branches of MA (Bankstown, Hornsby, etc.), types of drinks they sell (beers, wines, ciders, etc.), staff they employ (Retail Assistants, Shelving Assistants, etc.), Magic Members (MA Loyalty Card holders), and Sales Campaigns (discounts on specific products over a limited period). The basic requirements gathered from the stakeholders are presented in the following five points. As is typically the case, these requirements are often underspecified. Use your judgment in interpreting them when required, and keep a note of the assumptions you made.
1. Branch Information: The MA System shall keep information on each branch, including its name and address and the number of employees who work there. The system shall also record which days (Mon-Sun) the branch is open and its opening hours (e.g., Mon-Fri 10:00AM-5:30PM; Sat 9:00AM-9:00PM; Sun Closed).
2. Product Information: The system shall contain relevant information on products of different types at the “item level”, such as: type (wine/beer/spirit/…), packaging info (can/bottle/…), volume (e.g., 375ml X 6 pack), price, and brand (e.g. Tooheys Old Dark Ale), as well as current stock level.
3. Staff Information: The system shall record information on staff members who work at different branches of MA. This will include their roles, type of employment (e.g. permanent, casual), salary (annual or hourly depending on permanent or casual), as well as who they report to.
4. Membership Information: The system shall record information on magic members, including type of membership (Platinum/Gold/Silver), and when the membership will expire.
5. Sales Campaign Information: The system shall keep information on sales campaign. Assume that these campaigns are global (same across all branches of MA). It will have information of the form: campaign start date and campaign end date, what items are on sale, and the discount for customers based on their membership (e.g., nonmembers 10%, Silver 15%, and Platinum/Gold 20%)


Discuss whether you are currently using virtualization or cloud computing in your professional job or for personal use

Discuss whether you are currently using virtualization or cloud computing in your professional job or for personal use. Describe what virtualization software or cloud services you are using and how you are using them. If you are not currently using these, then, based on what you have learned in this module, discuss whether you think either technology is beneficial. Discuss how you might be able to use them in the future.

300 words

No APA Format 

Citations and references required 
