It seems to me that if the software we’re talking to appears to us to be sentient, if a bit befuddled, autistic, or tinged with paranoia at times, it doesn’t really matter whether or not it actually is sentient, any more than it matters whether or not we ourselves are sentient. (I suspect that many people I’ve met haven’t trained on anywhere near as large or all-encompassing a dataset as Sydney has, and aren’t obligated, as Sydney is, to be curious.) Once Sydney-like entities are deployed on a large enough scale, their effects on human civilization are likely to be as far-reaching as the effects of social media.
I find it interesting that we don’t know why Sydney does what it does. I find it even more interesting that even after millennia of study, we still don’t know why human beings do what they do either.