Comment: For your convenience, I will offer some Theoretical Exercises and Computational Exercises below. Not all will be assigned.
Your homework graphs and tables should be professional looking, like those you would use in a presentation or submit with a paper you were trying to publish. Annotate your graphs appropriately. Add appropriate commentary to any tables. Nice looking tables and graphs are not cluttered with text: If you have long comments, include them as comments in your program files. (Ordinarily you will have long comments following the creation of any figure.) Note that the natural place to put any notes or short comments that belong with a table is under the results (not up by the date). This is often true with graphs too.
For each assignment, you will email me a program file. I will run the program file when I receive it; if it doesn't run I will not grade it. Make sure you use the filenames I suggest, and make sure you test-run the program immediately before emailing it. Make the subject line of your email: LastName: Econ-xxx HW#x. Also, always keep a copy of your program and of your email.
Note that I cannot review your programs before you submit them. However I am happy to answer any specific question about any homework assignment, whether or not it is a programming question. It is ideal if you ask this question on the class email list, so that everyone in the class can profit from your question and my answer.
It is a very important habit to never alter your raw data, so keep the data files I give you inviolate.
Program files should contain a series of comments that explain precisely what you are doing. A comment is not part of the program code.
Author: Your Name
Date: yyyy-mm-dd
Estimated time required for completion: 2-3 hours.
Most common errors:
failure to add comments fully describing each new command or option,
and failure to determine the units of rgdp_pc.
The primary purpose of this assignment is to introduce you to graphing data and to familiarize you with two important macroeconomic time series. There is a datafile for this assignment. Turn in your homework as a single program file by email.
Download the data file and description. (Be sure to download the data; do not copy and paste!) I will assume you saved these to a network drive on campus (G:), so that it is g:\macro\macro1.dat.
Note there is Python assistance and EViews assistance for this exercise.
This project is an exploration of very simple formulations of Okun's Law. You will use many commands that you have learned already, along with a few new ones. As always, I want you to add comments to your program file for each command you use. The comments should show that you understand what the command is doing, and you should pay special attention to explaining any arguments (including optional arguments) that are used. After you have explained the use of a command or a command option one time in detail, you can offer very short comments should it be used again.
y_ct.
Make a nice table of results and interpret it. E.g., what is the interpretation of the estimated slope in this regression? (Add a note to your results table explaining this, and add a comment explaining what the units are.)
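If you want to see the mechanics of estimating such a slope in miniature, here is a Python sketch using ordinary least squares. Every number and variable name below is made up for illustration; use the assignment's actual data and variable names.

```python
import numpy as np

# hypothetical data: change in unemployment rate and real GDP growth (percent)
du = np.array([-0.5, 0.3, 1.1, -0.2, -0.8, 0.6])
gy = np.array([4.0, 2.5, 0.5, 3.5, 4.5, 1.5])

# OLS fit of du on gy: the slope is the Okun coefficient
slope, intercept = np.polyfit(gy, du, 1)
# a slope near -0.5 would say: one extra point of GDP growth lowers
# the change in the unemployment rate by about half a point
print(slope, intercept)
```

The sign and magnitude of the slope are exactly what your results table should interpret.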
Note there is Python assistance and EViews assistance for this exercise.
Estimated time required for completion: 2 hours.
The unit roots assignment introduced you to some of the properties of unit root processes. In this exercise, we apply this to long-run inflation data for the US. The first thing you will need to do is get some CPI data. We will use a long-run CPI series from the Federal Reserve Bank of Minneapolis.
Your first task is to get CPI data from the Minneapolis Fed. Download the long-run US price data from 1800. (If this has disappeared, you may download the long-run US price data from 1913.) Create a data set named dpuroot.dat that has the year in the first column and the CPI in the second column, separated only by spaces. (No header: just numbers!) Also create a file dpuroot_readme.txt that gives a full description of the data, including its source and the format of your data file.
Load the CPI data so that you can work with it: use the name cpi. Add a comment to your program file summarizing how you loaded the data.
Check your data visually: make sure it makes sense (no missing values, odd values, etc).
Plot the inflation rate over time. Make sure you produce a professional looking line graph, adding text as needed to make it understandable.
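As a sketch of the computation (the CPI numbers below are hypothetical; yours come from the downloaded file), the annual inflation rate in percent is 100 times the change in the log of the CPI:

```python
import numpy as np

# hypothetical annual CPI levels; real data comes from the Minneapolis Fed file
cpi = np.array([100.0, 103.0, 106.1, 110.3, 112.5])

# annual inflation in percent: 100 * change in log(CPI)
infl = 100 * np.diff(np.log(cpi))
print(infl)

# to plot interactively, you could then do:
# import pylab
# pylab.plot(infl)
# pylab.show()
```

Note that differencing loses one observation: five CPI levels give four inflation rates.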
Sargent (1971 JMCB) argued that U.S. inflation experience suggested the U.S. inflation rate is stationary (i.e., does not contain a unit root).
Visually inspect your graph and add a comment (to your program) explaining what you think of his assessment of the stationarity of the inflation rate.
Note: you need not review the Sargent article for this exercise.
Use an augmented Dickey-Fuller test to check your visual assessment of the stationarity of the inflation rate. Display your Dickey-Fuller results. Add evaluative program comments.
Provide a final long comment explaining how to interpret your results. (Are shocks to the inflation rate temporary or permanent? What might explain this?)
Might the "permanent" abandonment of the gold standard in 1973 have changed the fundamental behavior of inflation? Break the sample in two and reconsider the stationarity of inflation over each subsample.
Turn in your program file. (As usual, make sure your program file contains all of your comments and shows all your graphs and tables, which will of course be fully annotated.)
Note there is Python assistance and EViews assistance for this exercise.
Using nominal and real GDP data from FRED, replicate the Lucas (1973 AER) estimates of his equations (11) and (12) for the U.S. Use the series GDP and GDPC1. Create a data set named lucas1973.dat that has GDP in the first column and GDPC1 in the second column, separated only by spaces. (You need not change the frequency.) Put full documentation of your data in lucas1973_readme.txt. Your program should begin by loading the data from this data set, which should be in the same folder as your program. Follow Lucas's procedures for transforming the data. (It is all in the article.)
Comment on your results.
(E.g., how good a match do you get?)
Use recursive least squares to examine the recursive coefficient estimates.
What do you learn from this?
What data transformations other than the one chosen by Lucas might be appropriate?
Note there is Python assistance and EViews assistance for this exercise.
From the Penn World Table version 6.2, retrieve "Real GDP Chain per worker" for all countries, for the years 1960, 1990, and 2000. Use the CSV format. Use this data to create pwt62hw.csv: just copy the data header and data and nothing else into a text file of that name. (The first line of the file should be the header. The last line of the file should not be blank: it should be the last line of your data.) Also create pwt62hw_readme.txt, which will contain your detailed description of the data.
Note there is Python assistance and EViews assistance for this exercise.
The Permanent Income Hypothesis is often tested by testing for excess sensitivity.
Null Hypothesis:
The change in consumption is not sensitive to predictable changes in current disposable income.
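One common way to implement the test is a two-step regression: first forecast the change in income from lagged information, then regress the change in consumption on the forecast. Here is a hedged sketch on synthetic data; the names, coefficients, and one-lag forecasting rule are illustrative assumptions, not the assignment's specification.

```python
import numpy as np

rng = np.random.RandomState(0)
T = 200
e = rng.standard_normal(T)
dy = np.zeros(T)
for t in range(1, T):            # autocorrelated income changes (synthetic)
    dy[t] = 0.7 * dy[t - 1] + e[t]
dc = rng.standard_normal(T)      # consumption changes, independent by design

# step 1: forecast dy from its own lag (a crude forecasting equation)
X1 = np.column_stack([np.ones(T - 1), dy[:-1]])
b1 = np.linalg.lstsq(X1, dy[1:], rcond=None)[0]
dy_hat = np.dot(X1, b1)

# step 2: excess-sensitivity regression of dc on the predictable part of dy
X2 = np.column_stack([np.ones(T - 1), dy_hat])
b2 = np.linalg.lstsq(X2, dc[1:], rcond=None)[0]
print(b2[1])  # under the null (no excess sensitivity) this should be near zero
```

With real data you would of course test the significance of the step-2 coefficient rather than just eyeball it.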
Turn in a program file that generates all of your results, with all your tables and graphs appropriately labeled and commented. Limit the amount of text on a graph so that you can make it look professional, like something you would include in a presentation. Additional written analysis may be included as comments in your program file.
Note there is Python assistance and EViews assistance for this exercise.
This exercise assumes you have installed Python version 2.5.1 (or higher), NumPy version 1.0.3.1 (or higher), and Matplotlib.
import numpy, pylab
The numpy and pylab modules will provide functionality that you will often use.
Once you have loaded these, you can get on with the project of loading your data and putting it in useful form.
Reading and writing data in rectangular text form is nicely handled by numpy.loadtxt and numpy.savetxt.
#load the raw data
raw_data = numpy.loadtxt(datafilepath)
#print your data (just as an error check)
print raw_data
#for convenience, put raw data in a rec_array
data = numpy.rec.fromrecords(raw_data, names=['date', 'unrate', 'pop', 'gdp96'], formats='i,f,i,f')
#now you can access your data by name
print data['pop']
You should always experiment with new commands in the interpreter,
but your final efforts should be written down as a program.
So that is what we do next: we graph a series with the plot command.
#create figure to hold plot
fig1 = pylab.figure(1)
#construct the axes for your plot
fig1_ax = fig1.gca()
fig1_ax.plot(data['date'],data['unrate'])
#show the graph
pylab.show()
Note that you will have to close the figure before you can return to the interpreter prompt.
(Also, only use the show command once.)
You can just cut and paste the code for this graph---along with the provided comments---into your program file.
However you need to read about the commands that you are using.
Pylab documentation is available online: see the compendium of pylab methods and the axes methods.
I provide some initial comments,
but you should add more detailed comments reflecting this reading by giving full explanations of what you are doing.
Your comments should be much more complete than the sample comments I have provided:
be sure to give a full explanation of each command and each option that you use.
#Student: explain the next command and fix
u_ax.set_title("Descriptive Title")
#Student: explain the next command and fix
u_ax.set_xlabel("Descriptive Label")
#Student: explain the next command and fix
u_ax.set_ylabel("Descriptive Label")
#Student: explain the next command and fix
pylab.figtext(0.12,0.04,"Source: ...")
You can create multiple plots in a single figure (using the subplot command) or separately scale the left and right axes and plot one series against each axis (using the twinx command).
Feel free to experiment with these in pursuit of a beautiful graph.
Whatever approach you take,
be sure that you include comments in your program file that explain what you are doing in detail.
Do not forget the show() command, which should occur at the end of your program.
Save and run your program one last time just to be sure it runs:
remember, programs that do not run will not be graded.
Call your program file lastname_uroot.py.
If you use Python's random module, seed the random number generator with the seed function provided by this module. To generate your standard normal variates, call the normalvariate function repeatedly (perhaps in a list comprehension).
Alternatively, if you use NumPy's random module, seed the generator with its seed function. To generate your standard normal variates, call the standard_normal function once with the proper shape.
(See the NumPy book or online documentation for details.)
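Both routes can be sketched in a few lines (the seed value 314 matches the EViews examples elsewhere in these assignments; any fixed seed works):

```python
import random
import numpy

# option 1: the standard library's random module
random.seed(314)
wn_list = [random.normalvariate(0, 1) for _ in range(300)]

# option 2: NumPy's random module, one call with the desired shape
numpy.random.seed(314)
wn_arr = numpy.random.standard_normal(300)

print(len(wn_list), wn_arr.shape)
```

Either way you end up with 300 standard normal draws; the NumPy route gives you an array that plots and cumulates directly.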
from matplotlib import pyplot
fig, ax = pyplot.subplots(1,1)
ax.plot(wn1)
pyplot.show()
(Follow the assignment instructions.)
(An augmented Dickey-Fuller test is available in statsmodels.) Use statsmodels.tsa.stattools.adfuller.
For scatter plots, use the scatter command provided by Matplotlib.
statsmodels provides regression routines. To add a regression line to your figure, just plot two points. (Hint: you may find the get_xlim method of your figure axes to be useful.)
To form cumulative sums, use the cumsum function. See the NumPy book or the Example List.
x = numpy.ones(20)
for i in range(19):
    x[i+1] += x[i]
What is x after this code executes?
How can this observation help you with the final part of the assignment?
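For reference, the cumsum function computes exactly these running sums in a single call; here it is applied to an arbitrary array of "shocks":

```python
import numpy

u = numpy.array([0.5, -1.0, 2.0, 0.25])
rw = numpy.cumsum(u)   # running partial sums of u: 0.5, -0.5, 1.5, 1.75
print(rw)
```

Applied to a vector of standard normal draws, this is precisely the random-walk construction the assignment asks for.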
For recursive least squares, use the rols method of the ls.OLS class. (Or use what you learned in econometrics to write your own!)
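If you do write your own, the idea is just OLS on an expanding sample. A sketch on made-up data (rols and ls.OLS are pytrix names; everything below is a hand-rolled illustration, not the pytrix implementation):

```python
import numpy as np

rng = np.random.RandomState(42)
x = rng.standard_normal(100)
y = 2.0 * x + rng.standard_normal(100)   # true slope is 2

# recursive (expanding-window) slope estimates from OLS with a constant
slopes = []
for t in range(10, 101):
    X = np.column_stack([np.ones(t), x[:t]])
    b = np.linalg.lstsq(X, y[:t], rcond=None)[0]
    slopes.append(b[1])
# the estimates should settle down near the true slope as t grows
print(slopes[-1])
```

Plotting slopes against the sample endpoint is the usual way to spot parameter instability.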
For Hodrick-Prescott filtering, use the hpfilter function in pytrix.timeseries. Name your program lastname_dpuroot.py.
(Download the .py files into a folder named pytrix beneath your homework folder, and then from pytrix.unitroot import adf_ls. Or if you are willing to install SVN (recommended!), you can change to your homework directory and then enter svn checkout http://econpy.googlecode.com/svn/trunk/pytrix pytrix.)
Please create your pytrix folder in one of two places: either directly below your homework folder (which contains your program), or in your Python site-packages folder. Use pytrix.unitroot.adf_ls for a fixed lag or pytrix.unitroot.adf to look at many lags.
There are several "stacking" commands in NumPy, which allow you to assemble arrays into larger arrays. You may find column_stack useful.
If you use pytrix.ls.OLS, it takes a single T×1 dependent variable and a conformable T×K array/matrix of independent variables. Remember that a constant will be added for you. Be sure to assign names to your variables, which will help when you are reading the output.
To create multiple plots in a single figure,
you may wish to use Pylab's subplot
command.
(It returns an axis instance, which you can use for plotting as usual.)
Place your data file in the same folder as your homework.
Access it with just its name (no path information).
If you work at the interpreter, you will have to os.chdir()
to this directory,
after you start the interpreter.
Use the CSV module
to read your data.
(Learn by trying the first example.)
Get the header line by using the next()
method of your reader.
Then iterate through the data to produce your lists of numbers to plot.
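Here is that pattern sketched with an in-memory file (the header and numbers are invented; with your real file you would pass an open file object to csv.reader):

```python
import csv
import io

# a tiny in-memory stand-in for pwt62hw.csv: header line plus data rows;
# the column names here are hypothetical
text = "country,year,rgdpwok\nUSA,1960,12500\nUSA,1990,48000\n"
reader = csv.reader(io.StringIO(text))

header = next(reader)            # grab the header row first
rows = list(reader)              # then iterate through the data rows
years = [int(row[1]) for row in rows]
values = [float(row[2]) for row in rows]
print(header, years, values)
```

Converting the strings to int and float as you read them saves trouble when you hand the lists to the plotting commands.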
In these assignments you will use the EViews econometric package. I presume very little statistical background. Be sure to read my Introduction to EViews.
You can annotate your graphs by using the addtext command.
After you freeze a table of results,
you can add commentary with the setcell command.
Be sure to include comments in your program files.
(Ordinarily you will have long comments following any use of the show
command.)
With EViews it is easiest to add text to the top of a graphical figure.
Estimated time required for completion: 2-3 hours.
Most common errors:
failure to add comments fully describing each new EViews command or option,
and failure to determine the units of rgdp_pc.
Be sure to read the introductory material before attempting this assignment.
The primary purpose of this assignment is to introduce you to the EViews econometric package. There is an EViews workfile for this assignment. Turn in your homework as a single program file by email.
open g:\macro1.wf1
'Author: Your Name
'Date: yyyy-mm-dd
Rename the workfile with the save command:
save temp
Add this code to your program, then save and run your program again.
This time you should see the new name (temp) in the title bar of your workfile window.
Now any changes we make will show up in temp.wf1 instead of in the workfile I sent you.
Make sure you add to your program file a comment carefully explaining this.
Next we graph the unemployment rate, using the graph and line commands.
'construct the graph
graph unrate_f.line unrate
'show the graph
show unrate_f
You can just cut and paste the code for this graph---along with the provided comments---into your program file, and then run your program again.
and then run your program again.
However you need to read about the EViews commands that you are using:
the EViews command reference should be available on your J: drive as a PDF file.
Add more detailed comments reflecting this reading by giving full explanations of what you are doing.
Your comments should be much more complete than the sample comments I have provided:
be sure to give a full explanation of each command and each option that you use.
Be sure to use such terms as "declare", "name", etc.
'Student: explain use of addtext command
unrate_f.addtext(0,-0.75) <Descriptive Title>
unrate_f.addtext(0,-0.5) Data Source: <...>
'Student: explain use of legend command
unrate_f.legend -display
You can put this information anywhere you wish on the graph,
but make the graph look as professional as possible.
(This often requires some experimentation.)
Unfortunately, EViews does not yet (version 4.0) offer programmatic control of the addtext font,
although this is available for the legend.
You can create a new series with the series command.
Suppose we wish to construct real GDP per capita.
We will need two series: real GDP (gdp96) and total population (pop).
Add to your program code that calculates real GDP per capita and assigns it to a new series, which you should name rgdp_pc.
'declare a new series named rgdp_pc
series rgdp_pc
'assign values to rgdp_pc
'Student: explain this computation in detail
rgdp_pc=1000000*gdp96/pop
After you declare the series, you will see it listed in your workfile.
Be sure to include a comment explaining why we calculate real GDP per capita the way we did.
(Be very specific: why do we multiply by 1,000,000?
What are the units of rgdp_pc?
To answer this you need to examine the label view of gdp96 and pop.)
Now graph rgdp_pc, again using the graph and line commands. Once again, annotate and show your graph.
(As data source, say "Computed from" and list the sources for the raw series.)
Your program comments should include a brief explanation of what you have calculated and graphed.
Next, plot the two series together in a single graph using the line command.
'Student: explain the 'x' option
graph uy_f.line(x) unrate rgdp_pc
'Student: explain the scale command
uy_f.scale(r) log
Be sure that you include comments in your intro.prg file that explain the use of all the options used in this code
(e.g., the x
option of the line
command).
These comments should be based on your reading in the EViews online Command Reference.
Add code to annotate and show your graph.
If you see any relationship between the two series,
add to your program file a comment describing that relationship.
Include a collection of show commands at the end of your program. These show commands should be in reverse order, so that the graphs and tables show in the order they were created.
Save and run the program one last time just to be sure it runs:
remember, programs that do not run will not be graded.
Estimated time required for completion: 4 hours.
Most common errors:
failure to add comments fully describing each new EViews command or option,
and failure to fully explain and correctly use the idea of a cumulative sum.
Less common errors include using rndseed twice and failure to relate the behavior of the regression residuals to the time series properties of the series.
Be sure to read the User Guide discussion of the Dickey-Fuller test before attempting this assignment.
For this exercise, you will write a program that produces beautiful, annotated graphs and tables. (Call your program file uroot.prg.) As always, your program file should include comments for each command line that introduces any new command or option. (By new, I mean that it is the first time that command or option is used in this program.) When the command on a program line sets optional values (e.g., lag length), be sure to comment on that fact for each of the options that are set. (Make sure you read about each new EViews command in the excellent online command reference. You will need to do this to appropriately comment each program line.) Also, make sure the first two lines are comments containing your name and the date of your program.
'Student: add detail to *each* comment
'declare a workfile named 'spurious'
workfile spurious u 1 300
'seed the random number generator
'Student: explain *why*!
rndseed 314
'Declare a series named wn1
series wn1
'Assign values to wn1
'Student: explain what kind of values!
wn1 = nrnd
'Student: explain displayname command
wn1.displayname White Noise 1
'Student: explain graph and line commands
graph wn1_f.line wn1
'Student: explain addtext and (t)
wn1_f.addtext(t) A White Noise Process
(You can just cut and paste this code into your program file,
and then fix the comment lines.
I have added comments to the first few lines,
as examples of how you should comment each program line.)
Produce a beautiful annotated graph.
Add a longer comment to your program file that makes a few observations about what you learn from your graph (about the nature of white noise and about the standard normal distribution).
Note that once in your program you will use the rndseed command. This is just to make sure that everyone's program output looks the same. Imagine that you have a large book of random numbers that the whole class is using for an experiment. To make sure everyone should get the same result, we agree on a page and a line number in the book, and we read our numbers sequentially from that point. Think of the command rndseed as equivalent to specifying a page and line number in a book of random numbers.
'Student: explain command and options
wn1.uroot(adf,none)
We will first conduct an augmented Dickey-Fuller (ADF) test, using automatic lag length selection.
(We will discuss lag length selection later.)
We use the ADF test to determine whether we can reject the null hypothesis that the series has a unit root.
To conduct the test,
we compare the ADF test statistic with the critical values.
If the test statistic is close to zero,
then we cannot reject the null hypothesis.
In this case we accept that shocks are permanent and the series has a unit root.
If the test statistic is far enough from zero,
so that in absolute value it is larger than the critical values,
then we reject the null hypothesis.
Compare the ADF test statistic to the critical values.
What is the value of the ADF test statistic?
Can you reject the null hypothesis for this series at the 10% level?
Can you reject the null hypothesis for this series at the 1% level?
(Add detailed comments to your program file explaining why or why not.)
Be sure to interpret the unit root test after carefully reading the EViews help for the uroot command,
including the discussion of Dickey-Fuller tests.
Follow the same procedure to create a second white noise process, and call it wn2. (Do not create a new workfile: you want all the series together in one workfile so that you can use them together. Do not reuse the rndseed command: that would make wn2 identical to wn1.) Now your workfile contains two series, wn1 and wn2, that represent two independent white noise processes.
'Student: explain command
group wng wn1 wn2
'Student: explain command
graph wn_scat.linefit wng
'Student: explain command
equation wn_reg.ls wn2 c wn1
'save regression residuals as wn_resids
wn_reg.makeresid wn_resids
'create a table named wn_tab of regression results
freeze(wn_tab) wn_reg.results
'add text to the table wn_tab
wn_tab(20,2) = "White Noise Regression Results"
show wn_tab
We will make a scatter plot of the two series and look at the regression line.
Copy the code into your program file,
add comments for every line of new code,
and discuss the results that are produced.
(What do you find?
Annotate your table of regression results with brief comments.
Pay special attention to the p-value for the coefficient on wn1:
it must be very small for us to reject the null hypothesis that the series are unrelated.
As always, put longer comments in the program file.)
A white noise process is stationary: innovations are transitory. In contrast, innovations to a random walk are permanent: it is non-stationary. We can generate a random walk by creating the cumulative sum of a series of white noise shocks.
'declare a series named csum
series csum
'assign 1 to all 300 elements
csum=1
'change sample to drop first obs
'Student: explain why this is necessary
smpl @first+1 @last
'create cumulative sum
csum=csum+csum(-1)
'restore full sample
smpl @all
We can simply produce a cumulative sum in EViews,
since series are sequentially updated element-by-element
(i.e., when it updates the second element, the first has already been updated).
As an illustration,
we will generate the numbers from 1 to 300 as the cumulative sum of a series whose every element is 1.
I am providing you with the code to do this,
but in order to do your assignment you need to understand how this code works.
E.g., what happens if we do not alter the sample before creating our cumulative sum, and why?
(Include an explanatory comment in your program file.)
Note that I have included brief program comments
(as examples of what you should be doing in your .prg file).
Note how every command line is commented.
For every program line that has a new command or option, make sure you are adding comments that fully explain the purpose of that program line.
before cumulative sum: 1 | 1 | 1 | 1 | 1 | …
after cumulative sum: 1 | 2 | 3 | 4 | 5 | …
Declare a new series named rw1 and assign the values in wn1 to it. This makes rw1 a white noise process with the same values as wn1. Change rw1 to a random walk process by forming its cumulative sum. (Do not forget to change the sample as discussed above.)
Now go through the same process we followed when first looking at wn1. That is, graph your random walk series and conduct an ADF test on it. Compare with your white noise series: add comments to your program file summarizing your observations.
If we independently generate a second random walk process, we of course expect it to be completely unrelated to the first. To see if this is right, construct a second random walk series from wn2. (Just follow the same procedure you used to construct rw1 from wn1.) Name your new random walk series rw2. Examine the two random-walk series just like you examined the white-noise series. (I.e., look at the scatter plot and the regression results, and include detailed discussion in your program comments.) Store the regression residuals as rw_resids.
Granger and Newbold (1974 J. Econometrics) showed that unrelated random walks appear related about 75% of the time.
Phillips (1986) showed that the larger your sample,
the more likely you are to reject unrelatedness!
However, take a look at the two series of regression residuals you stored,
and see if you can discover any clues to spurious regression in these.
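The assignment itself uses EViews, but the Granger-Newbold experiment is easy to replicate; here is an illustrative Python Monte Carlo (the sample size and replication count are arbitrary choices):

```python
import numpy as np

# Monte Carlo: regress one random walk on another and count how often
# the slope looks "significant" at the nominal 5% level
rng = np.random.RandomState(314)
nsim, T = 200, 100
reject = 0
for _ in range(nsim):
    rw1 = np.cumsum(rng.standard_normal(T))
    rw2 = np.cumsum(rng.standard_normal(T))
    X = np.column_stack([np.ones(T), rw1])
    b = np.linalg.lstsq(X, rw2, rcond=None)[0]
    resid = rw2 - np.dot(X, b)
    s2 = np.dot(resid, resid) / (T - 2)
    se = np.sqrt(s2 * np.linalg.inv(np.dot(X.T, X))[1, 1])
    if abs(b[1] / se) > 1.96:
        reject += 1
print(reject / float(nsim))   # far above the nominal 5% rejection rate
```

The rejection frequency comes out far above 5%, which is the spurious regression phenomenon in action.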
Produce a commented line graph of your two series of regression residuals
(which you should have named wn_resids and rw_resids above).
Use the m option of the line command, and name this graph resids_f.
Include further comments in your program file that summarize your observations on the differences between the two series of residuals.
(They should be very different!)
Finally we look at highly autoregressive series.
Using your wn1
and wn2
series,
produce two new series (call them ar1 and ar2), each described by
x_t = 0.98 x_{t-1} + u_t, where u_t is taken from the white noise series.
(You must construct these in almost the same way you constructed the two random walk series:
as a cumulative sum, but with the weight 0.98 instead of 1.0 on the lag.)
Repeat the rest of our exploration once again,
with your new autoregressive series.
That is, graph your first autoregressive series and conduct an ADF test on it.
Look at the scatter plot of ar2 against ar1 and report the regression results.
Examine the regression residuals.
Make summary observations as program comments.
Turn in your .prg file, which should include an explanatory comment for each program line that contains any new procedure or option. As always, you may talk about the assignment with your colleagues, but you must write your own .prg file. Finally, do not forget to include a collection of show commands at the end of your program (in reverse order, so that the graphs and tables show in the order they were created).
The unit roots assignment introduces you to some of the properties of unit root processes. In this exercise, we apply this to long-run inflation data for the US. The first thing you will need to do is get some CPI data. We will use a long-run CPI series from the Federal Reserve Bank of Minneapolis.
Your first task is to get CPI data from the Minneapolis Fed. Download the long-run US price data from 1800. (If this has disappeared, you may download the long-run US price data from 1913.) Import the CPI data into EViews. (See my discussion of importing data.) Add a comment to your program file summarizing how you got the data into a workfile.
I now assume that you have created an EViews workfile containing a CPI series named cpi,
which you have carefully checked visually.
You need to arrange to have access to the series from the program file you will write
(which you should name dpuroot.hw.prg).
There are several alternative approaches.
i. If you have saved the data as text or as a spreadsheet file,
you can simply include the appropriate read command in your program file.
ii. If you used another method,
you can save your workfile as g:\uscpiraw.wf1 and have your program begin by loading this workfile.
(Do not forget to immediately rename it so that you do not alter your raw data!)
iii. For this homework, I would like you to take a third approach:
save the CPI series as an individual database file.
To do this, give the command store(i) g:\cpi. That stores the data on disk. Your program file should create a new workfile and then import the stored data with the command fetch(i) g:\cpi. (Note that your final program file will *not* include save or store commands.)
Your EViews program file should then produce the following computations, graphs, and tables.
The dlog() function computes the difference of the logs of the annual CPIs, so the computation can be represented as
series d_lp = dlog(cpi)*100
(Generate this yourself, using the change in the log of CPI,
rather than relying on the Fed's calculations.
Include a program comment explaining why this use of logarithms correctly computes annual inflation rates.)
Use EViews to plot the inflation rate over time. Make sure you produce a professional looking line graph, using the addtext and displayname commands as in past exercises.
Sargent (1971 JMCB) argued that U.S. inflation experience suggested the U.S. inflation rate is stationary (i.e., does not contain a unit root).
Visually inspect your graph and add a comment (to your program) explaining what you think of his assessment of the stationarity of the inflation rate.
Note: you need not read the Sargent article for this exercise.
Use an augmented Dickey-Fuller test to check your visual assessment of the stationarity of the inflation rate.
You may do a preliminary "point-and-click" assessment to choose the parameters for the uroot
command;
just summarize that assessment in a program comment.
Be sure to freeze your unit root results into a table,
add appropriate labeling, and show your table.
Provide a final long comment explaining how to interpret your results. (Are shocks to the inflation rate temporary or permanent? What might explain this?)
Might the "permanent" abandonment of the gold standard in 1973 have changed the fundamental behavior of inflation?
Break the sample in two and reconsider the stationarity of inflation over each subsample.
(A nice touch would be to use sample objects.)
Turn in your .prg file. (As usual, make sure your .prg file contains all of your comments and shows all your graphs and tables, fully annotated.)
Estimated time required for completion: 2-3 hours.
Most common errors:
failure to use the correct formula for the primary deficit,
failure to attend carefully to units,
and failure to fully explain calculations, programming techniques, and significant results.
(Note: the simulated series should closely track the actual series.)
EViews updates its series sequentially, so it allows convenient representations of difference equations. Our first exercise will be to create three different series that illustrate the issue of stationarity. Open a new program file and call it deficits.hw.prg. We need a workfile to add these series to, so start by declaring an undated workfile of 50 observations named temp1.
scalar rho
rho=0.95
series y095
y095(1)=15
smpl @first+1 @last
y095=rho*y095(-1)
smpl @all
graph rho095.line y095
show rho095
We begin by declaring a series named y095.
Then we assign an arbitrary starting value of 15 to the series.
Then we fill in the other values based on the difference equation
y_t = ρ y_{t-1}.
This is easy to do because EViews sequentially updates the elements of the series.
You just need to remember to change the sample so the updating can take place appropriately.
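Although this part of the assignment uses EViews, the same difference equation is only a few lines in Python; an illustrative sketch:

```python
import numpy as np

def simulate(rho, y0=15.0, n=50):
    """Iterate y_t = rho * y_{t-1} forward from a starting value."""
    y = np.empty(n)
    y[0] = y0
    for t in range(1, n):
        y[t] = rho * y[t - 1]
    return y

# rho < 1 decays toward zero, rho = 1 stays put, rho > 1 explodes
for rho in (0.95, 1.00, 1.05):
    print(rho, simulate(rho)[-1])
```

Comparing the three endpoints makes the role of the autoregressive coefficient vivid.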
Use the same process to create graphs called rho100 and rho105, which set the value of rho to 1.00 and 1.05 respectively.
Finally, merge your three graphs into a single graph object named allrho by using the command
graph allrho.merge rho095 rho100 rho105
As always, be sure to fully comment your program,
so that all new commands are explained in detail.
Be sure to show
your graphs and comment in detail on what you learn from them.
Try to include both general and specific comments:
comment on what you have learned about how the behavior of a series over time depends on its autoregressive coefficient,
and apply this to what you know about debt dynamics.
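As a cross-check outside EViews, the same recursion can be sketched in Python. This is purely illustrative (the function name and variables below are mine, not part of the assignment); it mirrors the series updates above for the three values of rho.

```python
# Illustrative sketch (not EViews): iterate y(t) = rho * y(t-1) from y(1) = 15,
# for three values of rho, over 50 observations.
def simulate(rho, y0=15.0, n=50):
    y = [y0]
    for _ in range(n - 1):
        y.append(rho * y[-1])
    return y

stable = simulate(0.95)      # decays toward zero
unit_root = simulate(1.00)   # stays at 15 forever
explosive = simulate(1.05)   # grows without bound
```

Plotting the three paths shows the same qualitative behavior your three EViews graphs should display: decay, persistence, and explosion, depending on whether the autoregressive coefficient is below, at, or above one.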
Once you are satisfied with this first part of the assignment,
you can close
your workfile and proceed to the next part.
(So you will turn in a single program file for the entire assignment.)
series sim=debty
'create four sample objects
sample postwar 1946 1980
sample rb 1981 1992
sample clinton 1993 2000
sample bush2 2001 2002
'declare scalars
scalar dpy 'average deficit
scalar i 'average i rate
scalar n 'average gdp growth
'loop across samples
'(to comment, read pp.100-101!)
for %s postwar rb clinton bush2
smpl {%s}
dpy=@mean(defpy)
n=@mean(gyn)
i=@mean(tb3)
sim=(1+i)*sim(-1)/(1+n) + dpy
next
smpl @all
Now that you have learned more about difference equations,
we are going to reconsider the debt dynamics across different regimes.
Download the workfile for this exercise.
I will assume it is g:\macro1.wf1.
Add code to your program file to
open this workfile and then save it as g:\temp.wf1.
(Question: why do we save this under a different name?
Answer: we should never change the file that contains our raw data.)
We will need six series from this workfile:
tb3, gdp, fyonet, fyfr, fyoint, and fygfd.
As usual, the label view of these series will inform you of their contents.
(Pay attention to the units!)
You will need to add program code to update some of these series to the end of last year with data from the latest Economic Report of the President.
(Note that if you set the sample to a single year,
then assignments of values to series only affect that year.)
Use the series in your workfile to create the following new series:
defpy (the primary deficit as a proportion of gdp),
debty (federal debt as a fraction of gdp),
and gyn (the annual growth rate of gdp, dlog(gdp)).
Once you do this you are ready to simulate the debt dynamics for various historical periods.
Use the code I am providing to do this.
(Your job is to add thorough comments to make it clear you understand what is going on.
Make sure you read pp.100-101 and pp.431-432 of the Command Reference!)
Finally create a group named simg containing debty and sim,
and make a line graph showing the behavior of the two variables.
Take a few minutes to add comments on what you learn from all this.
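The recursion in the loop above, sim = (1+i)*sim(-1)/(1+n) + dpy, can also be sketched outside EViews. The Python version below is a minimal illustration; the regimes and parameter values are hypothetical, not the actual averages you will compute from the workfile.

```python
# Debt dynamics: b(t) = (1 + i) * b(t-1) / (1 + n) + d, where b is debt/GDP,
# i the average interest rate, n the average GDP growth rate, and d the
# average primary deficit as a share of GDP. All numbers here are made up.
def simulate_debt(b0, regimes):
    """regimes: list of (years, i, n, d); returns the simulated debt/GDP path."""
    path = [b0]
    for years, i, n, d in regimes:
        for _ in range(years):
            path.append((1 + i) * path[-1] / (1 + n) + d)
    return path

# Two hypothetical regimes: growth above the interest rate with primary
# surpluses, then the reverse.
path = simulate_debt(1.0, [(10, 0.04, 0.06, -0.01), (10, 0.07, 0.05, 0.02)])
```

The sketch makes the regime logic visible: when growth exceeds the interest rate and the primary budget is in surplus, the debt ratio falls; when the ordering reverses, it rises.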
This project is an exploration of very simple formulations of Okun's Law. You will use many EViews commands that you have learned already, along with a few new ones. As always, I want you to add comments to your program file for each command you use. The comments should show that you understand what the command is doing, and you should pay special attention to explaining any arguments (including optional arguments) that are used. After you have explained the use of a command or a command option one time in detail, you can offer very short comments should it be used again.
series y=log(1000000000*gdp96)
y.displayname Log of Real GDP
series u=unrate/100
u.displayname Unemployment Rate
We declare the series y and assign to it the log of real gdp.
We then declare the series u and assign to it the unemployment rate.
(Make sure you explain the calculations.)
equation y_e.ls() y c @trend
freeze(y_t1) y_e.results
show y_t1
y_e.fit(f=na) y_lt
y_lt.displayname() Real GDP: Linear Trend
y_e.makeresid() y_lc
y_lc.displayname() Real GDP: Linear Cycle
graph y_f.line() y y_lt
y_f.addtext(t) Real GDP and Its Trend
show y_f
'EV3: y_f.legend(s)
y_f.legend() columns(1)
We regress y on a constant and a trend.
The average annual percentage growth rate of real GDP over the period is given by the estimated slope in this regression.
(Add a note in your table stating this.
In your program file, add a comment explaining why it can be interpreted as an annual percentage growth rate.)
Next we construct a linear trend and linear cycle for y,
and we graph y and its linear trend.
Be sure to include the show command so that the graph appears on the screen when I run your program.
Examine this graph and include comments in your program file.
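To see why the trend slope measures the average growth rate, here is a hedged Python sketch (synthetic, noise-free data, nothing from the assignment's workfile): regressing log GDP on a constant and a trend recovers the per-period log growth rate.

```python
# OLS of y = log(GDP) on a constant and a linear trend. With annual data the
# slope is the average annual log growth rate, approximately the percentage
# growth rate. Synthetic series: exact 3% annual growth.
n = 40
t = list(range(n))
y = [10.0 + 0.03 * ti for ti in t]  # log real GDP, made-up values

t_bar = sum(t) / n
y_bar = sum(y) / n
slope = (sum((ti - t_bar) * (yi - y_bar) for ti, yi in zip(t, y))
         / sum((ti - t_bar) ** 2 for ti in t))
# slope recovers 0.03, i.e. about 3% average annual growth
```

With real data the slope will not fit exactly, of course, but the interpretation is the same: the coefficient on the trend is the average log growth per period of observation.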
equation okun_e1.ls y_lc c u
freeze(okun_e1_t) okun_e1.results
show okun_e1_t
What is your estimated ``Okun coefficient''?
freeze(okun_f1) okun_e1.rls(c) c(2)
okun_f1.addtext(t) Okun Coef: Constant Un
show okun_f1
We examine this with recursive least squares.
Create a graph of the recursive coefficient estimates of the Okun coefficient,
and comment on its stability or instability.
(Do not forget to include comments in your program file that show you have read about the rls
command in the Command Reference and understand its use.)
hpf y yn
yn.displayname() Real GDP: Trend
series y_fc=y-yn
y_fc.displayname() Real GDP: Cycle
graph y_hp.line y yn
y_hp.addtext(t) Real GDP: Flexible Trend
show y_hp
I have included code to do this for real GDP.
Do the same thing for u,
except you should also include the flexible cycle in the graph.
Call your graph un_f.
Does it seem to give a more reasonable characterization of the
behavior of the natural rate of unemployment over time?
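The hpf command computes the Hodrick-Prescott filter. As a rough stand-in outside EViews (this is NOT the HP filter, just a centered moving average that conveys the idea of a flexible trend and a cycle measured as deviations from it):

```python
# A flexible trend as a centered moving average; the cycle is the deviation
# of the series from that trend. Data below are made up.
def flexible_trend(y, window=5):
    h = window // 2
    return [sum(y[max(0, i - h):i + h + 1]) / len(y[max(0, i - h):i + h + 1])
            for i in range(len(y))]

y = [1.0, 1.2, 1.1, 1.4, 1.3, 1.6, 1.5, 1.8]
trend = flexible_trend(y)
cycle = [yi - ti for yi, ti in zip(y, trend)]
```

The HP filter does something smarter (it penalizes changes in the trend's growth rate rather than averaging mechanically), but the decomposition into a slowly moving trend and a cycle around it is the same idea.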
This project is an exploration of very simple Phillips curve formulations.
As usual, email fully commented .prg files, making sure your program adds all your commentary to your graphs (using addtext) and tables (using setcell).
All graphs and tables should be labeled nicely with your name, the date, the sample used, and an explanatory title.
Relevant commentary should be included as comments in your program file.
Comment:
Each addtext command will produce one line of text:
sadly, there is no end-of-line escape character in EViews 3.
(There is in EV4.)
So if you wish to add extensive text,
you need to break your text into lines and use a separate addtext command for each one.
If you have an extensive commentary,
put it in your .prg file as comments.
(You may note on your graph that you have done so.)
Note that you can use negative numbers when positioning your text.
'create a new monthly workfile with appropriate sample
workfile temp m 1947.01 2005.09
'read cpi data
read(t=dat,rect,skiprow=5,name,label=2,d=s,mult) d:\data\fred\cpi\cpiaucsl 2
The first line is a comment, since it starts with an apostrophe.
The second line creates a new monthly workfile for the period my data covers.
The third line is a comment on the fourth line,
which reads in the cpi data.
The read command has a lot of options since data file formats can vary.
[In my file cpiaucsl, I find five lines for notes,
followed by a line with the series names (date cpiaucsl unrate),
followed by a blank line,
followed by the date and price data in columns.
Make sure you adjust the sample (using smpl) before reading in UNRATE.]
Note: If you wish, after you read in your data,
you can save your workfile and comment out the lines that read in the data.
The rest of the .prg file will work with the series you have created in your workfile.
Hint: Always read in a date variable as well,
and check your data by examining it.
So that we will share nomenclature, rename the series as u and p and save the workfile as a:\temp.wf1. Make sure the .prg file you submit begins by opening a:\temp.wf1, which should contain only the variables u and p (and, if you wish, your date variable).
series lp = log(p)
series dlp = lp-lp(-1)
graph monthly1.scat(r) u dlp
monthly1.addtext(0,-.5) "Monthly inflation and unemployment"
'calculate the (scalar) natural rate
scalar u_nat = -c(1)/c(2)
'create a group for Granger test
group u_dlp u dlp
'run the Granger causality test with 18 lags
u_dlp.cause(18)
(Oddly, if you want to see your output, you must
proceed as in this example rather than applying
cause directly to the series.)
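The natural-rate scalar above comes from the fitted relationship: if the estimated equation is dlp = c(1) + c(2)*u, the natural rate is the unemployment rate at which the fitted value is zero. A one-line Python check with hypothetical coefficients (these are illustrative numbers, not estimates from the data):

```python
# Setting the fitted value c1 + c2 * u to zero gives u_nat = -c1 / c2.
# Coefficients below are hypothetical, purely for illustration.
c1, c2 = 0.03, -0.5
u_nat = -c1 / c2  # a 6 percent natural rate under these made-up coefficients
```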
Optional exercise: Try to reproduce Figure 1 in
Staiger, Stock, and Watson (1997 JEP).
Calculate the associated natural rate.
Note: Attempting exact replication can be very frustrating:
I want you to do your best to think about the differences between your data and theirs,
but I'm not expecting an exact match.
The easiest way to transform the frequency of the data is to
pick File, New, Database to create a new
EViews database, and then store your series in it.
Then pick File, New, Workfile to create a new
workfile at annual frequency.
If you stored price and unemployment data as p and u,
then you might fetch the data from your database using
fetch(c=l) p
fetch(c=a) u
which takes the end of period price level and the average
unemployment rate. (See the Help on frequency conversion.)
Note: You may prefer to save your series as individual .db files (using store(i)) so that you can use the fetch(i) command instead.
(You can read about fetch and store in the online command reference.)
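The two conversion options behave differently, and it matters for your results. A small Python sketch of end-of-period (c=l) versus average (c=a) conversion from monthly to annual data (the monthly values are made up):

```python
# Monthly-to-annual frequency conversion, schematically.
monthly_p = [100 + i for i in range(12)]    # hypothetical price level
monthly_u = [0.05] * 6 + [0.06] * 6         # hypothetical unemployment rate

annual_p = monthly_p[-1]                    # c=l: last (end-of-period) value
annual_u = sum(monthly_u) / len(monthly_u)  # c=a: average over the year
```

Taking the end-of-period price level makes the annual log difference span exactly twelve months, while averaging the unemployment rate summarizes labor-market conditions over the whole year.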
The Permanent Income Hypothesis is often tested by testing for excess sensitivity.
Null Hypothesis:
The change in consumption is not sensitive to predictable changes in current disposable income.
equation keynes.ls rcc c ryd
freeze(keqn) keynes.results
setcell(keqn,20,1,"your name")
(See the online command reference for the equation and freeze commands.)
graph c_ryd.scat ryd rcc
show c_ryd
c_ryd.option(r,0)
The first line creates a scatter plot of consumption, rcc, versus disposable income, ryd,
and names the graph c_ryd.
(You will see this name appear in your workfile.)
The second line opens a window displaying your graph.
The third line adjusts the graph options so that your
graph is more easily comparable to your previous work.
(See option in the online Command Reference for more details.)
As always, include addtext commands in your .prg file to annotate your graph.
(If you have extensive comments, leave these as comments in your .prg file.)
Turn in an EViews program file that generates all of your results, with all your tables and graphs appropriately labeled and commented.
(Don't forget to freeze regression results to produce a table to which you can add text.)
Limit the amount of text on a graph so that you can make it look professional, like something you would include in a presentation.
Additional written analysis may be included as comments in your program file.
You are encouraged to talk with each other about the assignment, but each student must produce their
own program file and their own analyses.
Replicate Table I and Table II in Mankiw, Romer, and Weil (1992 QJE).
For any theory exercise, your write-up should be neatly written and easy to follow. You are encouraged to use Scientific Notebook for your write up; it is available as an EagleNet application.
Consider the comparative statics of the Tobin (1969 JMCB) 3-asset model. Produce the supporting algebra for an open market purchase and for a ``helicopter drop'' increase in the money supply. Make sure you give a detailed argument to determine the signs of the partial derivatives of your reduced form functions. (Before you start, be sure to determine what is endogenous and what part of the model structure you need to work with.)
Tobin (1969 JMCB) derives the slopes of the LL, kk, and bb curves intuitively. Make sure you can give a detailed exposition of this intuition. Provide an algebraic derivation of each slope.
Provide graphs and intuition for an increase in ``animal spirits'' in the Friedman model (100% money finance). Do the long-run comparative statics algebra as well. What do you think of Friedman's claim that his proposal will stabilize an economy that is subject to aggregate demand shocks?
Provide graphs and intuition for an open market purchase in the 100% bond finance model presented in class. Add the long-run comparative statics algebra as well.
1. Do the long-run comparative statics algebra
for a) an increase in M,
and b) an increase in G
in the term structure model. Make sure your
algebra is detailed and neatly presented.
Be sure to discuss what you are doing and
why. Add supporting graphs and intuition.
2. In the term structure model, consider a
one-time, permanent, unanticipated increase
in M. Give a detailed description of the
behavior of the short-rate over time.
Be sure to explain why it is incoherent to
expect a jump in interest rates in this
model.
Consider the differential equation system
M = L(i_f + DE/E, Y)
DY = f(Y, E, F)
with the partial derivatives defined in class.
Find a solution using the adjoint matrix technique,
under the assumption that Y is predetermined.
1. Given the classical model
Y = A(r, Y, F)
m = L(r + pi, Y)
endogenous: r, m
a. Find the signs of the partial derivatives of the reduced forms for r and m using standard comparative statics algebra. (The solution to this is in your class notes.)
b. Give a graphical and intuitive proof that your algebraic results are correct. Intuition should be *very* detailed. Explain why each curve has the slope it has. Explain why each curve shifts the way it does. Your explanations should be simple and clear enough to be understood by an average student in an undergraduate macroeconomics class.
For those of you using Vim
(a truly wonderful editor, btw),
here is a hint for accumulating
all your show commands in the right order.
Once you have written your program,
including all your show commands each time
you produce a new graph or table,
issue the editor command
:g/^show/t 0
Then move the copied show commands (which
you will find at the top of your file,
in reverse order)
to the bottom of the file.