Say I create a really popular dApp on Algorand and everybody calls it hundreds of times a minute. The smart contract is constantly updating its local and global state and issuing inner transactions, hundreds of times a minute.
I am aware that Algorand currently only supports 1,000 TPS (this question is not about that).
Under the scenario described above, does the smart contract run into throughput limitations? Is it better to create a second smart contract to deal with all the application calls (kind of like Google would have multiple servers and redundancy to support their applications and reduce downtime)?
No. The contract is evaluated on the nodes themselves, so creating multiple copies would not provide redundancy or load balancing the way multiple servers would.
Transactions are, by nature, serialized, so there is no concept of concurrent requests. Given 10 transactions sent to a node at the exact same time, they will still be evaluated one after another, in an order determined by the node and the network.
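To make the serialization point concrete, here is a toy model (plain Python, not the actual AVM) of transactions that arrive "at the same time" but are still applied strictly one after another, in whatever order the node and network settle on:

```python
# Toy model of serialized transaction evaluation (not the actual AVM):
# simultaneous submissions are still applied one at a time.
import random

def evaluate(state, txns):
    """Apply transactions strictly in sequence, in the given order."""
    for txn in txns:
        state = txn(state)  # each txn sees the result of all prior txns
    return state

# Ten "simultaneous" calls that each increment a global counter.
txns = [lambda s: s + 1 for _ in range(10)]
random.shuffle(txns)  # the final ordering is up to the node/network

final = evaluate(0, txns)
print(final)  # all 10 updates are applied; no lost updates, regardless of order
```

Because evaluation is sequential, the final state is the same no matter which ordering the network picks; there is no race between the ten calls.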
If you expect many changes in short order, be mindful that a transaction may be submitted against state that is outdated by the time it's evaluated. In the case of something like an order book, for example, the resting limit orders may already be filled before the transaction is accepted, so the state of the order book will have changed between when the user submitted the transaction and when it was evaluated.
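A minimal sketch of that stale-state problem (a hypothetical order book in plain Python, not a real Algorand app): two traders observe the same resting order, but only the fill that happens to be evaluated first succeeds.

```python
# Hypothetical order book: order_id -> quantity resting at the limit price.
book = {"order-1": 100}

def fill(book, order_id, qty):
    """A 'fill' call: succeeds only if the resting order still has enough size."""
    if book.get(order_id, 0) < qty:
        return book, "rejected"      # state moved underneath the caller
    book = dict(book)
    book[order_id] -= qty
    return book, "filled"

# Traders A and B both see order-1 with 100 units and submit fills for all of it.
# The network happens to evaluate B's transaction first.
book, b_result = fill(book, "order-1", 100)
book, a_result = fill(book, "order-1", 100)

print(b_result, a_result)  # B fills; A's transaction was built on outdated state
```

The contract itself stays consistent; it is the *caller's view* that goes stale, which is why app calls like this should be written to fail gracefully (or re-check state) rather than assume the snapshot they were built against still holds.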