The rapid development of, and attention toward, generative artificial intelligence (AI) make its trustworthiness a critical issue of societal importance. Diffusion models (DMs) are large generative AI models that can produce high-quality images from text instructions. There are concerns about the trustworthiness of DMs in terms of privacy, fairness, and explainability. They may over-memorize their training data, which creates vulnerabilities to privacy leakage. Without proper guidance, they may inherit social biases from the training data and generate images harmful to underprivileged groups. They also cannot explain why or how images are generated from the given instructions. This project evaluates the trustworthiness of DMs and provides advanced solutions to address these issues. Its results can benefit decision-makers and practitioners in areas such as health care, media, law, and education in adopting generative models to assist content creation and daily productivity.