Search the Community
Showing results for tags 'chatgpt'.
-
Anyone tried it out? I saw a YouTube video showing it can even solve Excel formulas, write essays, etc.

https://www.bbc.com/news/technology-64538604

Quote: "It has been two months since the public launch of AI chatbot ChatGPT by the firm OpenAI - and it did not take long for people to start noticing what a game-changer this really is. Whether you have asked it to write you a song in the style of your favourite musician, sneaked in a homework question (500 words on the end of World War Two? no problem), tasked it to write copy for your company website, write a speech or even churn out specific program code, ChatGPT has proved that it can deliver - and in a convincing way. There has been acres of reporting about its potential threat to a wide range of jobs, and indeed to our entire model of education if students can get their coursework done and university applications written instantly via ChatGPT or its rivals."
-
https://sg.style.yahoo.com/quit-teaching-because-chatgpt-173713528.html

I Quit Teaching Because of ChatGPT

This fall is the first in nearly 20 years that I am not returning to the classroom. For most of my career, I taught writing, literature, and language, primarily to university students. I quit, in large part, because of large language models (LLMs) like ChatGPT.

Virtually all experienced scholars know that writing, as historian Lynn Hunt has argued, is "not the transcription of thoughts already consciously present in [the writer's] mind." Rather, writing is a process closely tied to thinking. In graduate school, I spent months trying to fit pieces of my dissertation together in my mind and eventually found I could solve the puzzle only through writing. Writing is hard work. It is sometimes frightening. With the easy temptation of AI, many—possibly most—of my students were no longer willing to push through discomfort.

In my most recent job, I taught academic writing to doctoral students at a technical college. My graduate students, many of whom were computer scientists, understood the mechanisms of generative AI better than I do. They recognized LLMs as unreliable research tools that hallucinate and invent citations. They acknowledged the environmental impact and ethical problems of the technology. They knew that models are trained on existing data and therefore cannot produce novel research. However, that knowledge did not stop my students from relying heavily on generative AI. Several students admitted to drafting their research in note form and asking ChatGPT to write their articles.

As an experienced teacher, I am familiar with pedagogical best practices. I scaffolded assignments. I researched ways to incorporate generative AI in my lesson plans, and I designed activities to draw attention to its limitations. I reminded students that ChatGPT may alter the meaning of a text when prompted to revise, that it can yield biased and inaccurate information, that it does not generate stylistically strong writing and, for those grade-oriented students, that it does not result in A-level work. It did not matter. The students still used it.

In one activity, my students drafted a paragraph in class, fed their work to ChatGPT with a revision prompt, and then compared the output with their original writing. However, these types of comparative analyses failed because most of my students were not developed enough as writers to analyze the subtleties of meaning or evaluate style. "It makes my writing look fancy," one PhD student protested when I pointed to weaknesses in AI-revised text.

My students also relied heavily on AI-powered paraphrasing tools such as Quillbot. Paraphrasing well, like drafting original research, is a process of deepening understanding. Recent high-profile examples of "duplicative language" are a reminder that paraphrasing is hard work. It is not surprising, then, that many students are tempted by AI-powered paraphrasing tools. These technologies, however, often result in inconsistent writing style, do not always help students avoid plagiarism, and allow the writer to gloss over understanding. Online paraphrasing tools are useful only when students have already developed a deep knowledge of the craft of writing. Students who outsource their writing to AI lose an opportunity to think more deeply about their research.
In a recent article on art and generative AI, author Ted Chiang put it this way: "Using ChatGPT to complete assignments is like bringing a forklift into the weight room; you will never improve your cognitive fitness that way." Chiang also notes that the hundreds of small choices we make as writers are just as important as the initial conception. Chiang is a writer of fiction, but the logic applies equally to scholarly writing. Decisions regarding syntax, vocabulary, and other elements of style imbue a text with meaning nearly as much as the underlying research.

Generative AI is, in some ways, a democratizing tool. Many of my students were non-native speakers of English. Their writing frequently contained grammatical errors. Generative AI is effective at correcting grammar. However, the technology often changes vocabulary and alters meaning even when the only prompt is "fix the grammar." My students lacked the skills to identify and correct subtle shifts in meaning. I could not convince them of the need for stylistic consistency or the need to develop voices as research writers.

The problem was not recognizing AI-generated or AI-revised text. At the start of every semester, I had students write in class. With that baseline sample as a point of comparison, it was easy for me to distinguish between my students' writing and text generated by ChatGPT. I am also familiar with AI detectors, which purport to indicate whether something has been generated by AI. These detectors, however, are faulty. AI-assisted writing is easy to identify but hard to prove.

As a result, I found myself spending many hours grading writing that I knew was generated by AI. I noted where arguments were unsound. I pointed to weaknesses such as stylistic quirks that I knew to be common to ChatGPT (I noticed a sudden surge of phrases such as "delves into"). That is, I found myself spending more time giving feedback to AI than to my students. So I quit.

The best educators will adapt to AI. In some ways, the changes will be positive. Teachers must move away from mechanical activities or assigning simple summaries. They will find ways to encourage students to think critically and learn that writing is a way of generating ideas, revealing contradictions, and clarifying methodologies. However, those lessons require that students be willing to sit with the temporary discomfort of not knowing. Students must learn to move forward with faith in their own cognitive abilities as they write and revise their way into clarity. With few exceptions, my students were not willing to enter those uncomfortable spaces or remain there long enough to discover the revelatory power of writing.
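The revision-prompt comparison activity the author describes is easy to reproduce yourself. Here is a minimal sketch, assuming the official `openai` Python client and an API key in the environment; the model name and the sample draft are placeholder assumptions of mine, not details from the article.

```python
# A minimal sketch of the classroom activity described above: send a draft
# to an LLM with a bare revision prompt, then print both versions so subtle
# shifts in meaning and style can be compared by hand.
# Assumes the `openai` package and an OPENAI_API_KEY environment variable;
# the model name is a placeholder, not one named in the article.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

draft = (
    "Our experiment suggest that the proposed method improve accuracy, "
    "but more tests is needed before we can make strong claims."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model; substitute whichever is available
    messages=[{"role": "user", "content": f"Fix the grammar:\n\n{draft}"}],
)
revised = response.choices[0].message.content

print("ORIGINAL:\n", draft)
print("\nREVISED:\n", revised)
# The point of the exercise: check whether "fix the grammar" also changed
# vocabulary or meaning, as the author observes it often does.
```

Reading the two versions side by side is exactly the kind of comparison the author says requires a developed writer's eye.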
-
https://vulcanpost.com/843379/team-of-ai-bots-develops-software-in-7-minutes-instead-of-4-weeks/

Back in July, a team of researchers proved that ChatGPT is able to design a simple, producible microchip from scratch in under 100 minutes, following human instructions provided in plain English. Last month, another group — working at universities in China and the US — decided to go a step further and cut humans out of the creative process almost completely.

Instead of relying on a single chatbot answering questions asked by a human, they created a team of ChatGPT 3.5-powered bots, each assuming a different role in a software agency: CEO, CTO, CPO, programmer, code reviewer, code tester, and graphics designer. Each one was briefed about its role and provided with details about its behaviour and its requirements for communication with other participants, e.g. "designated task and roles, communication protocols, termination criteria, and constraints." Other than that, however, ChatDev's — as the company was named — artificial intelligence (AI) team would have to come up with its own solutions, decide which languages to use, design the interface, test the output, and provide corrections if needed.

The dream CEO

The bots were to follow an established waterfall development model, with the work broken up into designing, coding, testing, and documenting, and each bot assigned its role throughout the process. What I found particularly interesting is the exclusion of the CEO from the technical aspects of the process. His role is to provide the initial input and return for the summary, while leaving the techies and designers to do their jobs in peace — quite unlike in the real world! I think many people would welcome our new overlords, who are instructed not to interfere with the job until it's really time for them to. Just think how many conflicts could be avoided!

Once the entire team was ready to go, the researchers fed their virtual team specific software development tasks and measured how it performed, both on accuracy and on the time required to complete each task. The original article shows an example of a fully artificial conversation between all of the "members", followed later by, among others, an exchange between the CTO and the programmer. These conversations continued at each stage until its completion, with information then passed on for interface design, testing, and documentation (like creating a user manual).

Time is money

After running 70 different tasks through this virtual AI software dev company, over 86 per cent of the produced code executed flawlessly. The remaining 14 per cent or so faced hiccups due to broken external dependencies and limitations of ChatGPT's API — so it was not a flaw of the methodology itself. The longest time it took to complete a single task was 1,030 seconds, a little over 17 minutes — with an average of just six minutes and 49 seconds across all tasks.

This, perhaps, is not all that telling yet. After all, there are many tasks, big and small, in software development, so the researchers put their findings in context: "On average, the development of small-sized software and interfaces using CHATDEV took 409.84 seconds, less than seven minutes.
In comparison, traditional custom software development cycles, even within agile software development methods, typically require two to four weeks, or even several months per cycle."

At the very least, then, this approach could shave weeks off typical development time — and we are only at the very beginning of the revolution, with still not very sophisticated AI bots (and this wasn't even the latest version of ChatGPT). And if time wasn't enough of a saving, the basic cost of running each cycle with AI is just… $1. A dollar. Even if we factor in the necessary setup and input information provided by humans, this approach still offers an opportunity for massive savings.

Goodbye programmers?

Perhaps soon, but not yet. Even the authors of the paper admit that although the output produced by the bots was most often functional, it wasn't always exactly what was expected (though that happens to humans too — just think of all the times you did exactly what the client asked and they were still furious). They also recognised that the AI itself may exhibit certain biases, and that the settings it was deployed with could dramatically change the output, in extreme cases rendering it unusable. In other words, setting the bots up correctly is a prerequisite to success. At least today.

So, for the time being, I think we're going to see a rapid rise in human-AI cooperation rather than outright replacement. However, it's also difficult to escape the impression that through it we will be raising our successors, and that in the not-so-distant future humans will be limited to setting goals for AI to accomplish, while mastering programming languages will be akin to learning Latin.
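The role-playing mechanic the article describes is straightforward to sketch. Below is a minimal, simplified illustration of the pattern in Python, assuming the official `openai` client; the role briefings, model name, turn limit, and termination rule are my own placeholder assumptions, not the paper's actual protocol.

```python
# A minimal sketch of the role-playing pattern described above: two LLM
# "employees" with different system briefings exchange messages until a
# termination phrase appears. Briefings, model, and stopping rule are
# simplified assumptions, not ChatDev's actual protocol.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
MODEL = "gpt-3.5-turbo"  # the article says the bots were ChatGPT 3.5-powered

ROLES = {
    "CTO": "You are the CTO of a small software agency. Decide the language "
           "and architecture for the task. Reply DONE when the plan is settled.",
    "Programmer": "You are the programmer. Implement what the CTO decides "
                  "and raise concrete technical questions or objections.",
}

def take_turn(role: str, transcript: list[str]) -> str:
    """One turn: the named role responds to the conversation so far."""
    response = client.chat.completions.create(
        model=MODEL,
        messages=[
            {"role": "system", "content": ROLES[role]},
            {"role": "user", "content": "\n".join(transcript)},
        ],
    )
    return response.choices[0].message.content

transcript = ["Task: build a simple to-do list app."]
finished = False
for _ in range(4):  # a bounded turn budget stands in for real termination criteria
    for role in ROLES:
        reply = take_turn(role, transcript)
        transcript.append(f"{role}: {reply}")
        if "DONE" in reply:
            finished = True
            break
    if finished:
        break

print("\n\n".join(transcript))
```

In the paper, each waterfall phase (designing, coding, testing, documenting) is essentially this kind of dialogue repeated with different pairs of roles, which is part of why a full cycle can finish in minutes.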