Hair is difficult to model because of the sheer number of strands, the variety of their shapes, and the texture of the hair itself. Traditional physics- and geometry-based approaches to hair construction require complex computation and many parameters. In recent years, hair modeling methods based on single images, multiple images, and video have emerged; their main advantage is fast modeling. At present, hair geometry is usually represented by polylines of three-dimensional points. In this paper, three-dimensional multi-strips are used instead to represent hair geometry. A deep network first estimates the position and type of the hair in a single input image, and the most similar hairstyle model is retrieved from a database. The selected hair model is then fitted, connected, and fixed to the head model. Finally, dynamic hair is simulated by applying gravity, friction, collision detection, and other effects. The resulting model preserves the appearance of the input image as closely as possible and can simulate common hair geometries.
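To make the dynamic-simulation step concrete, the following is a minimal sketch (not the paper's implementation) of how a single hair strand can be animated under gravity with damping as a stand-in for friction and a plane test as a stand-in for collision detection; all constants and the mass-spring formulation are illustrative assumptions:

```python
# Minimal mass-spring sketch of single-strand hair dynamics (an illustration,
# not the paper's method): a strand is a chain of unit point masses connected
# by stiff springs; gravity pulls the chain down, velocity damping stands in
# for friction, and a simple plane test stands in for collision detection.
import math

GRAVITY = -9.81      # m/s^2, along the y axis
STIFFNESS = 500.0    # spring constant between neighboring points (assumed)
DAMPING = 0.98       # per-step velocity damping, a crude friction model
FLOOR_Y = -1.0       # collision plane (e.g. a shoulder), chosen arbitrarily

def simulate_strand(points, steps=200, dt=0.005):
    """Advance a strand (list of [x, y] root-to-tip) with semi-implicit Euler.
    The root point is pinned to the scalp and never moves."""
    rest = [math.dist(points[i], points[i + 1]) for i in range(len(points) - 1)]
    vel = [[0.0, 0.0] for _ in points]
    for _ in range(steps):
        forces = [[0.0, GRAVITY] for _ in points]
        for i, r in enumerate(rest):
            dx = points[i + 1][0] - points[i][0]
            dy = points[i + 1][1] - points[i][1]
            length = math.hypot(dx, dy) or 1e-9
            f = STIFFNESS * (length - r)      # Hooke's law along the segment
            fx, fy = f * dx / length, f * dy / length
            forces[i][0] += fx; forces[i][1] += fy
            forces[i + 1][0] -= fx; forces[i + 1][1] -= fy
        for i in range(1, len(points)):       # skip the pinned root (i == 0)
            vel[i][0] = (vel[i][0] + forces[i][0] * dt) * DAMPING
            vel[i][1] = (vel[i][1] + forces[i][1] * dt) * DAMPING
            points[i][0] += vel[i][0] * dt
            points[i][1] += vel[i][1] * dt
            if points[i][1] < FLOOR_Y:        # collision: clamp to the plane
                points[i][1] = FLOOR_Y
                vel[i][1] = 0.0
    return points

# A horizontal strand pinned at the origin droops under gravity.
strand = simulate_strand([[0.0, 0.0], [0.1, 0.0], [0.2, 0.0], [0.3, 0.0]])
```

A full system would run this per strand (or per strip) in three dimensions with strand-strand and strand-head collisions, but the same force-accumulate-then-integrate loop applies.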