Diana, thank you for this insightful and entertaining read (and for the mention of "She's a Beast" - I lift weights too, never dieted in my life!). Being a literary author, I use metaphors all the time in my client work and writing... I love them, they're in my DNA.
One thing I would question, though, is the suggestion to use chatbots to learn. According to dual-coding theory, we learn best when we combine verbal and visual representations, ideally by writing down or drawing the thing we're trying to learn. Interacting with a chatbot on a screen instead of writing things down or doodling the interconnections yourself is therefore probably not the best way to learn. Not to mention how prone chatbots are to errors and misrepresentations, even with info fed directly to them, and how much energy and water they use (far more than traditional online searches). IMHO, using an LLM to scan a long, dense document and pick out the few bullet points you need is one thing, because you can check those yourself; using it for real review and analysis is questionable. Besides, shouldn't you be using your own brain to strength-train on that information rather than outsourcing it to an algorithm? Isn't that like bringing your assistant to the gym to strength-train on your behalf?
I hear your concerns here, but in my experience LLMs have been incredible for building out mind maps of *what I don’t know*. Ask any of them to quiz you on your understanding of the big concepts in XYZ (I recently did the cell cycle with great results), and you’ll quickly learn where you need to focus your time.
Right, that is in essence what I said, depending on the LLM of course. I'm all for saving time strategically. We can't compare an LLM to a human expert, though, or better yet a group of human experts (say, the CD community), because learning isn't merely parsing and recomposing data and tokens.