Technology is largely neutral, but users are not
In a recent op-ed in the Globe and Mail, we explained that Canada’s projected weak economic growth relative to other industrialized countries could mean longer workweeks for Canadian workers if they’re to keep pace with living standards in those countries.
The op-ed generated discussion online, including from a writer who, interestingly, relied entirely on artificial intelligence (AI), specifically ChatGPT, one of the better-known AI tools. The writer asked ChatGPT three questions and then appears to have simply copied and pasted its response as a letter to us and the Globe and Mail. There are several interesting and indeed insightful aspects of the response. Let’s start with the major mistakes ChatGPT made.
First, ChatGPT claimed the op-ed presented a “false dichotomy” between “reducing the workweek and maintaining or increasing material living standards.” This is simply incorrect, as any careful reading of the op-ed shows. We specifically explained how higher rates of economic growth, and in particular gains in worker productivity, have allowed workers to increase their living standards while simultaneously reducing their weekly hours of work.
The problem, as we explained, is that Canada is expected to have the lowest rate of growth in per-worker GDP of any industrialized country. Thus, to keep pace with countries recording higher rates of economic growth, Canadian workers face a lose-lose choice: either work more hours to keep pace with living standards in those countries, or maintain their current workweeks and accept a relative decline in living standards. Put simply, if output per worker grows more slowly in Canada than abroad, Canadian incomes can keep pace only if hours worked rise enough to make up the difference.
The response from ChatGPT also criticized the piece for “oversimplifying” the linkage between government policies and the projected decline in living standards. There are actually two errors in this criticism. First, ChatGPT apparently did not review the study cited and linked to in the piece, since that study includes explanatory factors such as labour efficiency gains, capital investment and labour market changes. Second, ChatGPT seems unable to distinguish between an absolute decline in living standards, which is not part of our argument, and a relative decline in living standards, which is the core of the argument we present.
ChatGPT also seems to ignore, or perhaps is simply unaware of, the word-count limitations of an op-ed, as at least two of its criticisms essentially demand additional studies, data and other potential explanations. In other words, it’s impossible to address all of ChatGPT’s inquiries in a 700-word op-ed.
In addition to the multiple mistakes in ChatGPT’s analysis, there’s the more worrying approach of the writer, who is obviously interested and engaged. The problem is that the person seems to have accepted ChatGPT’s output as accurate, thoughtful and helpful, blindly and without independent thought. Had the writer simply compared what ChatGPT produced against the actual text of the op-ed, he likely would have caught some of the errors. This blind acceptance of ChatGPT as an authority, much as many people have accepted Google results, poses a real problem for public dialogue and debate. While both ChatGPT and Google can be helpful when gathering information, neither should be accepted uncritically.
A second interesting insight from the response involves the old technology adage “garbage in, garbage out.” Simply put, if a user asks ChatGPT, a search engine or any other AI tool a mistaken question, there’s a high likelihood, if not a certainty, that the technology will produce a mistaken answer. For example, the writer asked ChatGPT “what are some perverse or biased reasons the fraser institute may advocate for increasing the hours employees work to address low productivity instead of using other methods.”
As we explained above, and certainly within the op-ed, we did not “advocate” for increasing work hours. Indeed, the whole point of our piece was to explain the costs of Canada’s low productivity growth relative to other industrialized countries, and to note that improved government policies might help us avoid these outcomes.
New technologies can be immensely valuable additions to our work, our understanding of the world around us, and information-gathering more generally, but they must be used properly, that is, as a resource in making decisions rather than as a means of making decisions for us. By blindly accepting answers from AI tools such as ChatGPT without any scrutiny or review, people (including those genuinely interested in public policy debate) risk making major mistakes and misunderstanding the issues, as our recent experience shows.
Jason Clemens
Executive Vice President, Fraser Institute
Steven Globerman
Senior Fellow and Addington Chair in Measurement, Fraser Institute