Hello all,

Our company is facing an issue with a very large Kayako database. We currently have 200,000+ tickets, and the number is growing rapidly, roughly 1,500 per day. I tried to get advice via email support (email@example.com), but the answers were completely incompetent.

1. The first suggestion was to remove old closed tickets by hand through the staff interface. Seriously, am I supposed to repeat the same action 4,000 times at 50 tickets per page? Even if I increase the page size, it is still an unbearable amount of routine work.

2. The second one I hardly know how to comment on. They suggested this:

DELETE FROM swtickets WHERE dateline < 1307730600;
DELETE FROM swticketposts WHERE dateline < 1307730600;

What about the other tables related to swtickets? There are 17 (!!!) other tables with relations to swtickets, and every row they hold for these tickets would be left orphaned. Does the developer who suggested this solution really work at your company?

3. My own solution: the REST API. I wrote a script that fetches all tickets from a given department and then deletes the old ones. Guys, your API is a mess. There is no way to limit the output; can you imagine how long it takes to fetch, parse, and build an XML response for 100K tickets? Fine, I changed your code and set a hard limit. There is also no way to remove several tickets at once, and deleting tickets one by one via the API is SLOOOW, so our db/server hangs up again.

--------------------------------------------

So my final question is: what is the best way to purge old tickets? I've written SQL that removes data from the linked tables first and then from swtickets (written as SELECT queries for testing):

SELECT A.*, FROM_UNIXTIME(A.dateline)
FROM swticketposts A
LEFT JOIN swtickets B USING (ticketid)
WHERE (B.ticketstatusid = 3)
  AND (B.dateline + (86400 * 90) < UNIX_TIMESTAMP(NOW()))
ORDER BY B.dateline
LIMIT 10;

SELECT *
FROM swtickets
WHERE (ticketstatusid = 3)
  AND (dateline + (86400 * 90) < UNIX_TIMESTAMP(NOW()));

Any ideas?
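For anyone who wants to double-check the related tables on their own install, a query like the one below can enumerate candidates. It matches on the column name ticketid, which is an assumption on my part that every related table links on that column (swticketposts does); verify the list against your actual schema before deleting anything.

SELECT table_name
FROM information_schema.columns
WHERE table_schema = DATABASE()
  AND column_name = 'ticketid'
  AND table_name <> 'swtickets';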
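And to make the "linked tables first" idea concrete, here is the batched version I'm considering. This is only a sketch: the purge_ids temp table and the 5000 batch size are my own choices, only swtickets and swticketposts are real table names from above, and each of the other related tables would need the same DELETE pattern with its real name.

-- Collect one batch of ticket IDs to purge.
CREATE TEMPORARY TABLE purge_ids (ticketid INT UNSIGNED PRIMARY KEY);

INSERT INTO purge_ids (ticketid)
SELECT ticketid
FROM swtickets
WHERE ticketstatusid = 3
  AND dateline + (86400 * 90) < UNIX_TIMESTAMP(NOW())
ORDER BY dateline
LIMIT 5000;   -- small batches so the server does not hang

-- Delete the related rows first, then the tickets themselves.
DELETE A FROM swticketposts A JOIN purge_ids P USING (ticketid);
-- ...same DELETE for each of the other related tables...
DELETE T FROM swtickets T JOIN purge_ids P USING (ticketid);

DROP TEMPORARY TABLE purge_ids;
-- Repeat the whole procedure until the INSERT selects zero rows.

Does that look sane, or is there a better approach?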