The most important links, collected in one place.
You have a file with data you want to process with Pandas, and you want to make sure you won’t run out of memory. How do you estimate memory usage given the file size?

At times you may see estimates like these: “Have 5 to 10 times as much RAM as the size of your dataset”, or “several times the size of your dataset”, or 2×-3× the size of the dataset. All of these estimates can both under- and over-estimate memory usage, depending on the situation. In fact, I will go so far as to say that estimating memory usage is just not worth doing.

In particular, this article will:
- Demonstrate the very broad range of memory usage you will see just from loading the data, before any processing is done.
- Cover alternative approaches to estimation: measurement and streaming.
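One of the alternatives mentioned, measurement, can be sketched in a few lines: instead of guessing from file size, load a sample and check what it actually occupies in memory. This is a minimal illustration, not the article's own code; the file name and sample size are placeholders.

```python
import pandas as pd

# Load a sample of the file and measure its real in-memory size,
# then extrapolate per-row cost to the full dataset.
sample = pd.read_csv("data.csv", nrows=100_000)
bytes_per_row = sample.memory_usage(deep=True).sum() / len(sample)
print(f"Roughly {bytes_per_row:.0f} bytes per row once loaded")
```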
How interpreters work, what a "virtual machine" means in this context, and how to speed up execution.
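As a quick illustration of the "virtual machine" idea (not taken from the linked article): CPython compiles source code to bytecode, and its interpreter loop executes those instructions one at a time. The standard `dis` module shows them.

```python
import dis

def add(a, b):
    return a + b

# Prints the bytecode the CPython virtual machine runs for add():
# LOAD_FAST a, LOAD_FAST b, an addition instruction (BINARY_ADD on
# older versions, BINARY_OP on 3.11+), then RETURN_VALUE.
dis.dis(add)
```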
The problem started when I had two classes that needed to talk to each other. Sometimes, classes need to talk to each other in both directions. The following example is made up, but mostly behaves like the original problem. Let’s say I have a Director and an Actor. The Director tells the Actor to do_action(). In order to do the action, the Actor needs to get_data() from the Director. Here’s our director.
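A sketch of the two classes as described; the article's actual code may differ, and everything beyond do_action() and get_data() (the run() method, the constructor wiring) is an assumption made for illustration.

```python
class Director:
    def __init__(self):
        # The Director holds the Actor, and the Actor holds a reference
        # back to the Director: the two-way dependency described above.
        self.actor = Actor(self)

    def get_data(self):
        return "the data the actor needs"

    def run(self):
        self.actor.do_action()


class Actor:
    def __init__(self, director):
        self.director = director

    def do_action(self):
        # To do its action, the Actor has to call back into the Director.
        data = self.director.get_data()
        print(f"Acting on: {data}")


Director().run()
```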
There is an area of Python that many developers have problems with. It has seen many different solutions pop up over the years, along with plenty of opinions, wars, and attempted fixes. Many have complained about the packaging ecosystem and tools making their lives harder. Many beginners are confused about virtual environments. But does it have to be this way?
How Polars implements "lazy" data processing and why it uses less memory than Pandas.
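A minimal sketch of what "lazy" means here (the file and column names are made up): scan_csv builds a query plan instead of reading the file, and nothing is loaded until collect() runs the optimized plan, so only the columns and rows the query needs ever reach memory.

```python
import polars as pl

query = (
    pl.scan_csv("events.csv")          # builds a plan, reads nothing yet
      .filter(pl.col("amount") > 0)    # predicate pushed into the scan
      .select(["city", "amount"])      # only these columns are read
)
df = query.collect()                   # execution happens here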
Set up the Yelp/detect-secrets pre-commit hook in your project so you don't end up on the list.
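A sketch of the corresponding .pre-commit-config.yaml entry. The rev value is a placeholder: pin it to an actual detect-secrets release, and generate the .secrets.baseline file with `detect-secrets scan` before enabling the hook.

```yaml
repos:
  - repo: https://github.com/Yelp/detect-secrets
    rev: v1.5.0  # placeholder: pin to a real release tag
    hooks:
      - id: detect-secrets
        args: ["--baseline", ".secrets.baseline"]
```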
How to write a desktop application with Django.