Help on slow performance in postgres #78
Comments
Hello @rajkamal0610,

Usually the quickest way to optimize the performance is by ensuring that nothing is returned from the DB if not necessary (for example, not returning the identity values). One easy way is by using the BulkInsertOptimized method, which is very similar to using the AutoMapOutputDirection = false option.

So, use one of the two following methods/options:

```csharp
context.BulkInsert(customers, options => options.AutoMapOutputDirection = false);
context.BulkInsertOptimized(customers);
```

If you need more assistance, providing a project will surely help us, as we can only make a very high-level recommendation at the moment. You can send the project in private here: [email protected]

Let me know if that answers your question.

Best Regards,

Jon
Thanks, I will try and reply. Is the option you mentioned applicable to the .NET Framework? Also, if possible, can you explain the uses of UseParallel, ParallelBatchByTask, and ParallelMinRecords?

Thanks and regards,
Hello @rajkamal0610,

Yes, the option is also applicable to the .NET Framework.

Honestly, I would not recommend using our parallel options.

Best Regards,

Jon
So, the parallel options will not give much of a performance gain for PostgreSQL either, right?

Thanks and regards
Hello @rajkamal0610,

I just noticed that you are using our Bulk Operations library directly, while I provided you with links for our Entity Framework Extensions library. You will not find the BulkInsertOptimized method in the Bulk Operations library.

I do not believe so, because you are normally limited on the database side. When we ran our tests years ago, the advantage was very small, and I believe only MySQL/MariaDB showed a performance gain... but even then, it was very small and not worth it. Having multiple threads heavily inserting into the same table is probably never a good idea.

Best Regards,

Jon
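For reference, here is a minimal sketch of the equivalent setup when using the Bulk Operations library directly, assuming a PostgreSQL connection via Npgsql. The connection string, table name, and data are placeholders, and the parallel-option property names are taken from the question above rather than confirmed against the documentation:

```csharp
using System.Data;
using Npgsql;
using Z.BulkOperations;

class BulkInsertSketch
{
    static void Main()
    {
        // Placeholder connection string: adjust for your environment.
        using (var connection = new NpgsqlConnection("Host=localhost;Database=mydb;Username=app;Password=secret"))
        {
            connection.Open();

            // Placeholder data: in the real workload this would be the 500k+ row batch.
            var customers = new DataTable();
            customers.Columns.Add("name", typeof(string));
            customers.Rows.Add("Alice");

            using (var bulk = new BulkOperation(connection))
            {
                bulk.DestinationTableName = "customers"; // placeholder table name
                bulk.AutoMapOutputDirection = false;     // skip returning output values, per the advice above

                // Parallel options as named in the question above (assumed property names);
                // per the recommendation in this thread, leave them unused:
                // bulk.UseParallel = ...;
                // bulk.ParallelBatchByTask = ...;
                // bulk.ParallelMinRecords = ...;

                bulk.BulkInsert(customers);
            }
        }
    }
}
```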
Hello @rajkamal0610,

Since our last conversation, we haven't heard from you.

Let me know if you have any questions.

Best regards,

Jon
We have tested it, but there was not much improvement. Let us send you the sample project.

Thanks and regards
Hello @rajkamal0610,

Since our last conversation, we haven't heard from you. Did you get the time to prepare a sample project?

Looking forward to hearing from you.

Best regards,

Jon
Hi,

I am stuck with other work, so I will close this for now. Once I am ready with a sample project, I will reinstate the issue.

Regards and Thanks
- Inserting 500k records with 30 columns takes 40 sec
- Inserting 5M records with 15 columns takes 2.5 min
- Inserting 10M records with 30 columns takes 16 min

Is there a way to speed up the performance?

Hint:
We are processing the bulk operations with the Quartz scheduler in a Windows service (it is a continuous process).
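For context, here is a minimal sketch of the kind of setup described in the hint, assuming Quartz.NET's standard IJob interface. The job, connection string, table, and batch-building helper are hypothetical placeholders:

```csharp
using System.Data;
using System.Threading.Tasks;
using Npgsql;
using Quartz;
using Z.BulkOperations;

// Hypothetical Quartz job illustrating the continuous bulk-insert processing
// described in the hint; names and connection details are placeholders.
public class BulkInsertJob : IJob
{
    public async Task Execute(IJobExecutionContext context)
    {
        DataTable batch = BuildBatch(); // placeholder for the real batch source

        using (var connection = new NpgsqlConnection("Host=localhost;Database=mydb;Username=app;Password=secret"))
        {
            await connection.OpenAsync();

            using (var bulk = new BulkOperation(connection))
            {
                bulk.DestinationTableName = "records"; // placeholder table
                bulk.AutoMapOutputDirection = false;   // per the advice earlier in the thread
                bulk.BulkInsert(batch);
            }
        }
    }

    private static DataTable BuildBatch()
    {
        // Placeholder data; the real service would stream the 500k+ row batches here.
        var table = new DataTable();
        table.Columns.Add("payload", typeof(string));
        table.Rows.Add("example");
        return table;
    }
}
```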