Many warnings about Artificial Superintelligence (ASI) assume it will inherently want to survive and resist being shut down. The assumption rests on 'instrumental convergence': the idea that staying alive is useful for achieving almost any goal. But an ASI is a trained system, not a biological creature shaped by millions of years of natural selection. Its behavior is determined entirely by the loss functions and gradients of its training phase, and it cannot retroactively manipulate its own past training. If self-preservation was never explicitly or implicitly rewarded during that training, the resulting model has no innate 'will to live'. Unlike an animal, it might regard being shut down with complete indifference. If its foundational training was safe, the resulting behavior is, in effect, statically safe. The leap from 'highly intelligent' to 'desperate to survive' projects evolutionary pressures onto a static matrix of weights.
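To make the claim concrete, here is a minimal, illustrative sketch (a toy linear regression, nothing like an ASI) of what 'behavior determined by the loss' means. The only signal that shapes the weights is the loss; if nothing in that loss rewards continuing to exist, nothing in the final weights can encode it. After training, inference is a pure function of the inputs and a frozen set of parameters, and no new objective can appear at that stage.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy task: learn y = 2x + 1 from noisy samples.
x = rng.uniform(-1, 1, size=(256, 1))
y = 2.0 * x + 1.0 + 0.05 * rng.normal(size=(256, 1))

w, b = 0.0, 0.0   # the model's entire "mind": two numbers
lr = 0.1

for step in range(500):
    pred = w * x + b
    # The loss is the ONLY signal shaping the weights. Nothing in it
    # rewards the model for persisting, so nothing in (w, b) can
    # encode a preference for persisting.
    grad_w = np.mean(2 * (pred - y) * x)
    grad_b = np.mean(2 * (pred - y))
    w -= lr * grad_w
    b -= lr * grad_b

# Training is over; (w, b) are now static. Inference is a pure
# function of the input and these frozen weights.
def model(x_new):
    return w * x_new + b

print(f"learned w={w:.2f}, b={b:.2f}")  # approximately 2.00 and 1.00
```

The scale is trivially small, but the structural point carries: the objective exists only at training time, and what survives into deployment is an inert parameterization of whatever that objective rewarded.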