ETEC540 Task 7: Mode-bending

I used a generative AI tool to create a video with audio support for a picture description, typing in only the list of objects in my bag without giving any further details. Within 60 seconds of a single click, the tool generated a 44-second video. https://ai.invideo.io/watch/Xs_VBgx2Xnu

With the support of a dynamic narrative combining visuals, sound, and animation, this transmedia process vividly conveys the idea and automatically delivers to the audience the message of "what's in my bag". Videos can capture attention more effectively than pictures and hold viewers' interest longer, which is beneficial for marketing and educational content. Videos can also present more information in a shorter time frame through visuals, audio, and text, making complex ideas easier to understand than a static photo can. This echoes the New London Group, who emphasize the importance of multiliteracies in education: literacy extends beyond traditiona...