If you plug a 10BASE-T device into a 10/100 switch, only that one device's port runs at 10 Mbps. All the ports connected to 100BASE-TX devices still run at 100 Mbps. Whether the switch does cut-through or store-and-forward switching, it operates each port at that port's negotiated speed. The switch doesn't have to renegotiate every port down to 10/Half just because a 10/Half client transmitted a broadcast or multicast frame. That would be nuts.
If a server on 100BASE-TX needs to send a lot of data to a client on 10BASE-T, the server can saturate the client's 10 Mbps link, but it still has the other 90 Mbps of its own link available for transferring data to other devices.
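As a quick sanity check on that arithmetic, here's a trivial sketch (the numbers are just the nominal link rates; real throughput would be a bit lower after framing overhead):

```python
# Illustrative link-rate arithmetic; variable names are made up.
server_link = 100  # Mbps, the server's full-duplex 100BASE-TX port
client_link = 10   # Mbps, the client's 10BASE-T port

# The flow to the slow client is bottlenecked by the client's link.
to_slow_client = min(server_link, client_link)   # 10 Mbps

# The rest of the server's uplink stays available for other flows.
headroom = server_link - to_slow_client          # 90 Mbps
```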
I could dream up a pathologically bad switch design: the switch has only one small pool of frame buffers shared by all ports, plus a pathologically bad algorithm for choosing which frames to drop when the buffers fill up. In that design, if any one 100 Mbps device sent a lot of data to the 10 Mbps device, that single flow could fill all the switch's buffers and keep them dominated, causing every other traffic flow across the switch to suffer. But again, that's a contrived worst case. It doesn't seem likely that anyone would ship a switch that bad.
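To make the failure mode concrete, here's a toy discrete-time simulation of that hypothetical switch. All the numbers (frames per step, buffer sizes, drain rates) are invented for illustration; the point is only to show that pure tail drop on a single shared pool lets the heavy flow starve an unrelated flow, while a per-port buffer limit does not.

```python
# Toy model of the pathological shared-buffer switch described above.
# One heavy flow heads to a slow (10 Mbps) egress port, one light flow
# heads to a fast (100 Mbps) egress port. Rates are arbitrary units.

def simulate(shared_pool: bool, steps: int = 10, cap: int = 20):
    """Return (light_drops, heavy_drops) after `steps` time steps."""
    slow_q = fast_q = 0            # frames queued at each egress port
    heavy_drops = light_drops = 0

    def room_for(own_q):
        if shared_pool:            # bad design: one pool, pure tail drop
            return slow_q + fast_q < cap
        return own_q < cap // 2    # saner design: per-port buffer limit

    for _ in range(steps):
        for _ in range(10):        # heavy flow: 10 frames arrive per step
            if room_for(slow_q):
                slow_q += 1
            else:
                heavy_drops += 1
        if room_for(fast_q):       # light flow: 1 frame arrives per step
            fast_q += 1
        else:
            light_drops += 1
        slow_q -= min(1, slow_q)   # slow port drains 1 frame per step
        fast_q -= min(5, fast_q)   # fast port drains 5 frames per step

    return light_drops, heavy_drops
```

With the shared pool, the heavy flow's backlog occupies the whole buffer and the light flow starts losing frames even though its own egress port is nearly idle; with the per-port limit, the light flow loses nothing.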