But I would like to know if AlphaGo Zero can adapt to the way the opponent plays (opponent profile) or something like this.
That is not included in the algorithm as written; the "profile" of the opponent is effectively AlphaGo Zero itself, learned through self-play.
It is not clear whether adapting play style to a given opponent would offer any advantage. It would also be difficult to assess, because AlphaGo Zero is such a strong player that it already wins a large percentage of games against human players as-is. Detecting and measuring any improvement, except against earlier versions of itself, would be quite hard.
However, there are likely a few places in the algorithm where a learned model of an opponent's play style could in theory make AlphaGo Zero more efficient. The most obvious is the "rollout" policy (I'm not 100% sure if they use the same term), where the algorithm simulates and samples different possible trajectories through the game in order to predict likely outcomes.
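For concreteness, here is a minimal sketch of that kind of rollout: sampling moves from a policy to play out one possible continuation of the game. The function names and game interface (`policy_fn`, `step_fn`, `is_terminal_fn`) are made up for illustration and are not taken from DeepMind's code:

```python
import numpy as np

def rollout(state, policy_fn, step_fn, is_terminal_fn, max_moves=200):
    """Play out one simulated continuation by sampling moves from policy_fn.

    policy_fn(state) -> probability distribution over legal moves
    step_fn(state, move) -> next state after playing the move
    is_terminal_fn(state) -> True if the game is over
    """
    for _ in range(max_moves):
        if is_terminal_fn(state):
            break
        probs = policy_fn(state)                      # shape: (num_legal_moves,)
        move = np.random.choice(len(probs), p=probs)  # sample one move
        state = step_fn(state, move)                  # apply it and continue
    return state  # caller scores the final position to estimate the outcome
```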
The current policy in AlphaGo Zero is learned through self-play, but it is just a neural network that predicts the probability of each possible move given a board state. It could easily be adjusted in a supervised-learning fashion, based on sampled plays from an opponent. If it could be learned accurately, it should make searches more efficient and accurate - the impossible but ideal situation being that it predicted the opponent's moves exactly and thus could quickly find the best counter to their actions.

In fact, the original AlphaGo rollout policy did model human play in this way. It was based on a large database of moves from many master-level human games, not a single player. The DeepMind team suggested in their paper that this gave better results at the time than a self-play policy - they tried both, and the human database was better. Since then, AlphaGo Zero has surpassed the performance of the original AlphaGo without the database of human moves.
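A hedged sketch of what that supervised adjustment might look like, assuming a PyTorch policy network and a dataset of (board, move) pairs sampled from the opponent's games - the names and hyperparameters here are mine for illustration, not anything from the paper or DeepMind's training loop:

```python
import torch
import torch.nn as nn

def finetune_on_opponent(policy_net, opponent_moves, lr=1e-4, epochs=3):
    """Fine-tune an existing policy network to predict an opponent's moves.

    opponent_moves: iterable of (board_tensor, move_index) pairs sampled
    from the opponent's games; policy_net maps a board to move logits.
    """
    optimizer = torch.optim.SGD(policy_net.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for board, move in opponent_moves:
            logits = policy_net(board.unsqueeze(0))        # (1, num_moves)
            loss = loss_fn(logits, torch.tensor([move]))   # predict the opponent's move
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return policy_net
```

The idea is simply to nudge the move-probability predictions toward what this particular opponent tends to play, so that the search spends more of its budget on the lines that opponent is actually likely to choose.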