ETEC540 Task 7: Mode-bending
I used a generative AI tool to create a video with audio narration for a picture description, simply by typing in a list of the objects in my bag without giving any further details. Within 60 seconds of a single click, the tool generated a 44-second video.
https://ai.invideo.io/watch/Xs_VBgx2Xnu
Supported by a dynamic narrative that combines visuals, sound, and animation, this transmedia process vividly conveys the idea and automatically delivers the message of "what's in my bag" to the audience. Videos clearly capture attention better than pictures and hold viewers' interest longer, which benefits both marketing and educational content. By combining visuals, audio, and text, a video can present more information in a shorter time frame, making complex ideas easier to understand than a static photo can.
This echoes the New London Group's emphasis on the importance of multiliteracies in education. Literacy extends beyond traditional reading and writing; it encompasses various forms of communication, including visual, digital, and cultural literacies, and recognizes the diverse ways people express and interpret meaning.

Even though I gave the tool few details about the picture's background or the video's purpose, the autogenerated video delighted me at first sight. Yet if I had produced the video myself, the output would have been quite different. The tool does not account for the cultural and social background of the person entering the prompt, so the video is generated from a Westerner's point of view, at best. This echoes the New London Group's advocacy for understanding literacy in context: we need to consider the social, cultural, and technological factors that influence how people communicate and learn. Designing learning experiences should reflect the complexities of modern communication and empower students to navigate and create meaning across multiple modes.