/imagine

A quick post to share some ideas on getting started with prompting, using MidJourney as an example. Prompting is a new literacy, and prompt engineering is already becoming a sought-after skill.

The slides below give a quick overview of how to get started with the MidJourney image generator, using prompts and parameters to get closer to what you imagine. The principles will transfer well enough to other tools; if you try them, you’ll get a feel for how different models work.

Update [March 16, 2023]: MidJourney V5 is live. It produces higher-quality images, but needs more detailed prompts. Use “--v 5” at the end of the prompt.
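As an illustration (the subject and parameter values here are made up, not taken from the slides), a basic prompt combines a description with parameters appended at the end:

  /imagine prompt: a lighthouse at dusk, watercolor, soft warm light --ar 3:2 --v 5

Here --ar sets the aspect ratio and --v selects the model version.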

The possibilities are endless, and there is a lot to learn.


Some alternatives to MidJourney:

  • PlaygroundAI offers Stable Diffusion 1.5 & 2.1 and is free with a login. DALL-E 2 is also included, but requires a paid subscription.
  • Microsoft Designer has DALL-E 2, with a waitlist.
  • Adobe Firefly has announced a waitlist (March 21, 2023); it is trained on commercially safe images, with a commitment to reducing bias.

The Ethics of AI Image Generation

The emergence of these tools has generated controversy and legal cases related to the use of artists’ works to train diffusion models for image generation. As you learn these skills, take some time to consider the ethics of these models, and how you can model ethical use of AI tools.

Learning From Others

There are lots of great creators on Twitter sharing their prompts and experiments, usually with the prompts in the alt-text of the images.

I learned a new term from @PipCleaves: knolling, arranging objects at right angles and photographing them from above. I don’t know why, but it is so aesthetically pleasing.
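To try the style yourself, a hypothetical knolling prompt (my own example, not one of Pip’s) might look like:

  /imagine prompt: knolling photo of vintage camera gear, neatly arranged at right angles on a white background, top-down view --v 5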

Some of my efforts:
