question_id: int64, values 59.5M to 79.6M
creation_date: string (date), 2020-01-01 00:00:00 to 2025-05-16 00:00:00
link: string, lengths 60 to 163
question: string, lengths 53 to 28.9k
accepted_answer: string, lengths 26 to 29.3k
question_vote: int64, values 1 to 410
answer_vote: int64, values -9 to 482
72,112,776
2022-5-4
https://stackoverflow.com/questions/72112776/shap-value-plotting-error-on-databricks-but-works-locally
I want to do a simple shap analysis and plot a shap.force_plot. I noticed that it works without any issues locally in a .ipynb file, but fails on Databricks with the following error message: Visualization omitted, Javascript library not loaded! Have you run `initjs()` in this notebook? If this notebook was from another user you must also trust this notebook (File -> Trust notebook). If you are viewing this notebook on github the Javascript has been stripped for security. If you are using JupyterLab this error is because a JupyterLab extension has not yet been written. Code: import xgboost import shap shap.initjs() X, y = shap.datasets.boston() bst = xgboost.train({"learning_rate": 0.01}, xgboost.DMatrix(X, label=y), 100) explainer = shap.TreeExplainer(bst) shap_values = explainer.shap_values(X) shap.force_plot(explainer.expected_value, shap_values[0,:], X.iloc[0,:]) Is there any way to get the image plotting work on Databricks?
Let's try something slightly different (matplotlib=True): import xgboost import shap X, y = shap.datasets.boston() bst = xgboost.train({"learning_rate": 0.01}, xgboost.DMatrix(X, label=y), 100) explainer = shap.TreeExplainer(bst) shap_values = explainer.shap_values(X) shap.force_plot( explainer.expected_value, shap_values[0,:], X.iloc[0,:], matplotlib=True # <-- )
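A small follow-up sketch (not part of the original answer): with matplotlib=True you can also pass show=False and save the figure to a file, which avoids the notebook JavaScript entirely. The output path below is a hypothetical example.

import matplotlib.pyplot as plt
import shap

# Render the force plot with matplotlib instead of the JS widget and keep
# the current figure open so it can be saved (show=False).
shap.force_plot(
    explainer.expected_value,
    shap_values[0, :],
    X.iloc[0, :],
    matplotlib=True,
    show=False,
)
plt.savefig("/dbfs/tmp/force_plot.png", bbox_inches="tight")  # hypothetical path
plt.close()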
5
23
72,162,458
2022-5-8
https://stackoverflow.com/questions/72162458/how-to-refer-to-self-in-pandas-subsetting
When I'm exploring data in an ad hoc way, I often have code like this: X = (adj_all.o.diff(1) / adj_none.o.diff(1)).diff(1) print(X[X > 0]) Is there a way to do this in a single line in an easy way? The following works but is verbose: (adj_all.o.diff(1) / adj_none.o.diff(1)).diff(1)[(adj_all.o.diff(1) / adj_none.o.diff(1)).diff(1) > 0] I want something like this: (adj_all.o.diff(1) / adj_none.o.diff(1)).diff(1)[self > 0] Note that this isn't production code. It is part of ad hoc exploration where iteration speed is important to results, which is why I wish to be able to do this common thing in a single line.
You can use pipe: (adj_all.o.diff(1) / adj_none.o.diff(1)).diff(1).pipe(lambda x: x[x>0])
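A self-contained sketch of the same .pipe trick on made-up data, just to show the pattern:

import pandas as pd

adj = pd.Series([1.0, 3.0, 2.0, 6.0, 5.0])

# Build the derived series once, then filter it inside the same chain.
result = adj.diff(1).pipe(lambda x: x[x > 0])
print(result)
# 1    2.0
# 3    4.0
# dtype: float64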
4
4
72,160,981
2022-5-8
https://stackoverflow.com/questions/72160981/how-to-make-a-csv-row-for-each-2-lines-in-a-txt-file
I have a text file like this: Viruses/GCF_000820355.1_ViralMultiSegProj14361_genomic.fna.gz Sclerophthora macrospora virus A Viruses/GCF_000820495.2_ViralMultiSegProj14656_genomic.fna.gz Influenza B virus RNA Viruses/GCF_000837105.1_ViralMultiSegProj14079_genomic.fna.gz Tomato mottle virus And I need to get a csv file like this: Viruses/GCF_000820355.1_ViralMultiSegProj14361_genomic.fna.gz,Sclerophthora macrospora virus A Viruses/GCF_000820495.2_ViralMultiSegProj14656_genomic.fna.gz,Influenza B virus RNA Viruses/GCF_000837105.1_ViralMultiSegProj14079_genomic.fna.gz,Tomato mottle virus Because later I want to use this like a tuple to find the compressed file, read it and get a final file with names like: Viruses/GCF_000837105.1/Tomato mottle virus.fna I just need to learn how to do the first part of the problem. It could be with: sed awk R Python Any help would be much appreciated. This is hard for me to accomplish because the original filenames are very messed up. I have tried this: sed -z 's/\n/,/g;s/,$/\n/' multi_headers However, it puts a comma at every \n instead of only every second one.
Using any awk in any shell on every Unix box and only storing 1 line at a time in memory so it'll work no matter how large your input file is: $ awk '{ORS=(NR%2 ? "," : RS)} 1' file Viruses/GCF_000820355.1_ViralMultiSegProj14361_genomic.fna.gz,Sclerophthora macrospora virus A Viruses/GCF_000820495.2_ViralMultiSegProj14656_genomic.fna.gz,Influenza B virus RNA Viruses/GCF_000837105.1_ViralMultiSegProj14079_genomic.fna.gz,Tomato mottle virus There's a lot happening in a small amount of code above so here's an explanation: ORS is the builtin variable containing the string to be printed at the end of each output record (record = line in this case), a newline by default. RS is the builtin variable containing the string (or regexp) that separates each input record, a newline by default. NR is the builtin variable containing the current record/line number so NR%2 is 1 for odd numbered records and 0 for even numbered. NR%2 ? "," : RS is a ternary expression resulting in , for odd numbered lines, \n (or whatever else you have set RS to, e.g. \r\n) for even numbered. 1 is a true condition which causes the default action of printing the current record to be executed. So the above script says "if the current line number is odd print it with a , at the end, otherwise print it with a newline at the end", hence it's joining every pair of lines with a , between.
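Since the question also lists Python as an option, here is a rough Python equivalent of the awk one-liner (the input name comes from the question's sed attempt, the output file name is an assumption): it pairs consecutive lines and joins each pair with a comma.

# Join every pair of consecutive lines with a comma.
with open("multi_headers") as src, open("multi_headers.csv", "w") as dst:
    lines = [line.rstrip("\n") for line in src]
    for path, name in zip(lines[0::2], lines[1::2]):
        dst.write(f"{path},{name}\n")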
4
5
72,156,580
2022-5-7
https://stackoverflow.com/questions/72156580/azure-databricks-error-with-custom-library-on-cluster-in-vnet
We are using Azure Databricks with a single-node cluster in a VNet (Runtime Version 10.4 LTS). We also need to use a custom/private python module (wheel). After the library is installed on the cluster, everything is working fine, but after the cluster is restarted and the library installed, the following error appears on the execution of any cell (de-/reattaching doesn't solve the issue): + Failure starting repl. Try detaching and re-attaching the notebook. java.lang.Exception: org.apache.hadoop.hive.ql.metadata.HiveException: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.metastore.HiveMetaStoreClient at org.apache.spark.sql.hive.HiveExternalCatalog.$anonfun$withClient$2(HiveExternalCatalog.scala:160) at org.apache.spark.sql.hive.HiveExternalCatalog.maybeSynchronized(HiveExternalCatalog.scala:112) at org.apache.spark.sql.hive.HiveExternalCatalog.$anonfun$withClient$1(HiveExternalCatalog.scala:150) at com.databricks.backend.daemon.driver.ProgressReporter$.withStatusCode(ProgressReporter.scala:364) at com.databricks.spark.util.SparkDatabricksProgressReporter$.withStatusCode(ProgressReporter.scala:34) at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:149) at org.apache.spark.sql.hive.HiveExternalCatalog.databaseExists(HiveExternalCatalog.scala:300) at org.apache.spark.sql.internal.SharedState.externalCatalog$lzycompute(SharedState.scala:201) at org.apache.spark.sql.internal.SharedState.externalCatalog(SharedState.scala:192) at org.apache.spark.sql.hive.HiveSessionStateBuilder.externalCatalog(HiveSessionStateBuilder.scala:59) at org.apache.spark.sql.hive.HiveSessionStateBuilder.$anonfun$resourceLoader$1(HiveSessionStateBuilder.scala:66) at org.apache.spark.sql.hive.HiveSessionResourceLoader.client$lzycompute(HiveSessionStateBuilder.scala:160) at org.apache.spark.sql.hive.HiveSessionResourceLoader.client(HiveSessionStateBuilder.scala:160) at org.apache.spark.sql.hive.HiveSessionResourceLoader.$anonfun$addJar$1(HiveSessionStateBuilder.scala:164) at org.apache.spark.sql.hive.HiveSessionResourceLoader.$anonfun$addJar$1$adapted(HiveSessionStateBuilder.scala:163) at scala.collection.immutable.List.foreach(List.scala:431) at org.apache.spark.sql.hive.HiveSessionResourceLoader.addJar(HiveSessionStateBuilder.scala:163) at org.apache.spark.sql.execution.command.AddJarsCommand.$anonfun$run$1(resources.scala:33) at org.apache.spark.sql.execution.command.AddJarsCommand.$anonfun$run$1$adapted(resources.scala:33) at scala.collection.immutable.Stream.foreach(Stream.scala:533) at org.apache.spark.sql.execution.command.AddJarsCommand.run(resources.scala:33) at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:80) at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:78) at org.apache.spark.sql.execution.command.ExecutedCommandExec.executeCollect(commands.scala:89) at org.apache.spark.sql.execution.QueryExecution$$anonfun$$nestedInanonfun$eagerlyExecuteCommands$1$1.$anonfun$applyOrElse$1(QueryExecution.scala:160) at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withCustomExecutionEnv$8(SQLExecution.scala:209) at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:356) at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withCustomExecutionEnv$1(SQLExecution.scala:160) at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:958) at 
org.apache.spark.sql.execution.SQLExecution$.withCustomExecutionEnv(SQLExecution.scala:115) at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:306) at org.apache.spark.sql.execution.QueryExecution$$anonfun$$nestedInanonfun$eagerlyExecuteCommands$1$1.applyOrElse(QueryExecution.scala:160) at org.apache.spark.sql.execution.QueryExecution$$anonfun$$nestedInanonfun$eagerlyExecuteCommands$1$1.applyOrElse(QueryExecution.scala:156) at org.apache.spark.sql.catalyst.trees.TreeNode.$anonfun$transformDownWithPruning$1(TreeNode.scala:565) at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:167) at org.apache.spark.sql.catalyst.trees.TreeNode.transformDownWithPruning(TreeNode.scala:565) at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.org$apache$spark$sql$catalyst$plans$logical$AnalysisHelper$$super$transformDownWithPruning(LogicalPlan.scala:30) at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.transformDownWithPruning(AnalysisHelper.scala:268) at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.transformDownWithPruning$(AnalysisHelper.scala:264) at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.transformDownWithPruning(LogicalPlan.scala:30) at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.transformDownWithPruning(LogicalPlan.scala:30) at org.apache.spark.sql.catalyst.trees.TreeNode.transformDown(TreeNode.scala:541) at org.apache.spark.sql.execution.QueryExecution.$anonfun$eagerlyExecuteCommands$1(QueryExecution.scala:156) at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$.allowInvokingTransformsInAnalyzer(AnalysisHelper.scala:324) at org.apache.spark.sql.execution.QueryExecution.eagerlyExecuteCommands(QueryExecution.scala:156) at org.apache.spark.sql.execution.QueryExecution.commandExecuted$lzycompute(QueryExecution.scala:141) at org.apache.spark.sql.execution.QueryExecution.commandExecuted(QueryExecution.scala:132) at org.apache.spark.sql.Dataset.&lt;init&gt;(Dataset.scala:225) at org.apache.spark.sql.Dataset$.$anonfun$ofRows$2(Dataset.scala:104) at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:958) at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:101) at org.apache.spark.sql.SparkSession.$anonfun$sql$1(SparkSession.scala:793) at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:958) at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:788) at org.apache.spark.sql.SQLContext.sql(SQLContext.scala:695) at com.databricks.backend.daemon.driver.DriverLocal.$anonfun$new$3(DriverLocal.scala:267) at com.databricks.sql.acl.CheckPermissions$.trusted(CheckPermissions.scala:1566) at com.databricks.backend.daemon.driver.DriverLocal.$anonfun$new$2(DriverLocal.scala:267) at scala.collection.Iterator.foreach(Iterator.scala:943) at scala.collection.Iterator.foreach$(Iterator.scala:943) at scala.collection.AbstractIterator.foreach(Iterator.scala:1431) at scala.collection.IterableLike.foreach(IterableLike.scala:74) at scala.collection.IterableLike.foreach$(IterableLike.scala:73) at scala.collection.AbstractIterable.foreach(Iterable.scala:56) at com.databricks.backend.daemon.driver.DriverLocal.&lt;init&gt;(DriverLocal.scala:250) at com.databricks.backend.daemon.driver.PythonDriverLocalBase.&lt;init&gt;(PythonDriverLocalBase.scala:152) at com.databricks.backend.daemon.driver.PythonDriverLocal.&lt;init&gt;(PythonDriverLocal.scala:73) at com.databricks.backend.daemon.driver.PythonDriverWrapper.instantiateDriver(DriverWrapper.scala:697) at 
com.databricks.backend.daemon.driver.DriverWrapper.setupRepl(DriverWrapper.scala:335) at com.databricks.backend.daemon.driver.DriverWrapper.run(DriverWrapper.scala:224) at java.lang.Thread.run(Thread.java:748) Caused by: java.lang.Throwable: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.metastore.HiveMetaStoreClient at org.apache.hadoop.hive.ql.metadata.Hive.getDatabase(Hive.java:1169) at org.apache.hadoop.hive.ql.metadata.Hive.databaseExists(Hive.java:1154) at org.apache.spark.sql.hive.client.Shim_v0_12.databaseExists(HiveShim.scala:619) at org.apache.spark.sql.hive.client.HiveClientImpl.$anonfun$databaseExists$1(HiveClientImpl.scala:435) at scala.runtime.java8.JFunction0$mcZ$sp.apply(JFunction0$mcZ$sp.java:23) at org.apache.spark.sql.hive.client.HiveClientImpl.$anonfun$withHiveState$1(HiveClientImpl.scala:335) at org.apache.spark.sql.hive.client.HiveClientImpl.$anonfun$retryLocked$1(HiveClientImpl.scala:236) at org.apache.spark.sql.hive.client.HiveClientImpl.synchronizeOnObject(HiveClientImpl.scala:272) at org.apache.spark.sql.hive.client.HiveClientImpl.retryLocked(HiveClientImpl.scala:228) at org.apache.spark.sql.hive.client.HiveClientImpl.withHiveState(HiveClientImpl.scala:315) at org.apache.spark.sql.hive.client.HiveClientImpl.databaseExists(HiveClientImpl.scala:435) at org.apache.spark.sql.hive.client.PoolingHiveClient.$anonfun$databaseExists$1(PoolingHiveClient.scala:321) at org.apache.spark.sql.hive.client.PoolingHiveClient.$anonfun$databaseExists$1$adapted(PoolingHiveClient.scala:320) at org.apache.spark.sql.hive.client.PoolingHiveClient.withHiveClient(PoolingHiveClient.scala:149) at org.apache.spark.sql.hive.client.PoolingHiveClient.databaseExists(PoolingHiveClient.scala:320) at org.apache.spark.sql.hive.HiveExternalCatalog.$anonfun$databaseExists$1(HiveExternalCatalog.scala:300) at scala.runtime.java8.JFunction0$mcZ$sp.apply(JFunction0$mcZ$sp.java:23) at com.databricks.spark.util.FrameProfiler$.record(FrameProfiler.scala:80) at org.apache.spark.sql.hive.HiveExternalCatalog.$anonfun$withClient$2(HiveExternalCatalog.scala:151) ... 70 more Caused by: java.lang.Throwable: Unable to instantiate org.apache.hadoop.hive.metastore.HiveMetaStoreClient at org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1412) at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.&lt;init&gt;(RetryingMetaStoreClient.java:62) at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:72) at org.apache.hadoop.hive.ql.metadata.Hive.createMetaStoreClient(Hive.java:2453) at org.apache.hadoop.hive.ql.metadata.Hive.getMSC(Hive.java:2465) at org.apache.hadoop.hive.ql.metadata.Hive.getDatabase(Hive.java:1165) ... 88 more Caused by: java.lang.Throwable: null at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1410) ... 
93 more Caused by: java.lang.Throwable: Error creating transactional connection factory at org.datanucleus.api.jdo.NucleusJDOHelper.getJDOExceptionForNucleusException(NucleusJDOHelper.java:671) at org.datanucleus.api.jdo.JDOPersistenceManagerFactory.freezeConfiguration(JDOPersistenceManagerFactory.java:830) at org.datanucleus.api.jdo.JDOPersistenceManagerFactory.createPersistenceManagerFactory(JDOPersistenceManagerFactory.java:334) at org.datanucleus.api.jdo.JDOPersistenceManagerFactory.getPersistenceManagerFactory(JDOPersistenceManagerFactory.java:213) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at javax.jdo.JDOHelper$16.run(JDOHelper.java:1965) at java.security.AccessController.doPrivileged(Native Method) at javax.jdo.JDOHelper.invoke(JDOHelper.java:1960) at javax.jdo.JDOHelper.invokeGetPersistenceManagerFactoryOnImplementation(JDOHelper.java:1166) at javax.jdo.JDOHelper.getPersistenceManagerFactory(JDOHelper.java:808) at javax.jdo.JDOHelper.getPersistenceManagerFactory(JDOHelper.java:701) at org.apache.hadoop.hive.metastore.ObjectStore.getPMF(ObjectStore.java:331) at org.apache.hadoop.hive.metastore.ObjectStore.getPersistenceManager(ObjectStore.java:360) at org.apache.hadoop.hive.metastore.ObjectStore.initialize(ObjectStore.java:269) at org.apache.hadoop.hive.metastore.ObjectStore.setConf(ObjectStore.java:244) at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:79) at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:139) at org.apache.hadoop.hive.metastore.RawStoreProxy.&lt;init&gt;(RawStoreProxy.java:58) at org.apache.hadoop.hive.metastore.RawStoreProxy.getProxy(RawStoreProxy.java:67) at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.newRawStore(HiveMetaStore.java:497) at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.getMS(HiveMetaStore.java:475) at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB(HiveMetaStore.java:523) at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.init(HiveMetaStore.java:397) at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.&lt;init&gt;(HiveMetaStore.java:356) at org.apache.hadoop.hive.metastore.RetryingHMSHandler.&lt;init&gt;(RetryingHMSHandler.java:54) at org.apache.hadoop.hive.metastore.RetryingHMSHandler.getProxy(RetryingHMSHandler.java:59) at org.apache.hadoop.hive.metastore.HiveMetaStore.newHMSHandler(HiveMetaStore.java:4944) at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.&lt;init&gt;(HiveMetaStoreClient.java:171) ... 
98 more Caused by: java.lang.Throwable: null at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.datanucleus.plugin.NonManagedPluginRegistry.createExecutableExtension(NonManagedPluginRegistry.java:606) at org.datanucleus.plugin.PluginManager.createExecutableExtension(PluginManager.java:330) at org.datanucleus.store.AbstractStoreManager.registerConnectionFactory(AbstractStoreManager.java:203) at org.datanucleus.store.AbstractStoreManager.&lt;init&gt;(AbstractStoreManager.java:162) at org.datanucleus.store.rdbms.RDBMSStoreManager.&lt;init&gt;(RDBMSStoreManager.java:285) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.datanucleus.plugin.NonManagedPluginRegistry.createExecutableExtension(NonManagedPluginRegistry.java:606) at org.datanucleus.plugin.PluginManager.createExecutableExtension(PluginManager.java:301) at org.datanucleus.NucleusContextHelper.createStoreManagerForProperties(NucleusContextHelper.java:133) at org.datanucleus.PersistenceNucleusContextImpl.initialise(PersistenceNucleusContextImpl.java:422) at org.datanucleus.api.jdo.JDOPersistenceManagerFactory.freezeConfiguration(JDOPersistenceManagerFactory.java:817) ... 127 more Caused by: java.lang.Throwable: Attempt to invoke the &quot;HikariCP&quot; plugin to create a ConnectionPool gave an error : Failed to initialize pool: Could not connect to address=(host=prod-metastore.mysql.database.azure.com)(port=3306)(type=master) : prod-metastore.mysql.database.azure.com at org.datanucleus.store.rdbms.ConnectionFactoryImpl.generateDataSources(ConnectionFactoryImpl.java:232) at org.datanucleus.store.rdbms.ConnectionFactoryImpl.initialiseDataSources(ConnectionFactoryImpl.java:117) at org.datanucleus.store.rdbms.ConnectionFactoryImpl.&lt;init&gt;(ConnectionFactoryImpl.java:82) ... 145 more Caused by: java.lang.Throwable: Failed to initialize pool: Could not connect to address=(host=prod-metastore.mysql.database.azure.com)(port=3306)(type=master) : prod-metastore.mysql.database.azure.com at com.zaxxer.hikari.pool.HikariPool.checkFailFast(HikariPool.java:512) at com.zaxxer.hikari.pool.HikariPool.&lt;init&gt;(HikariPool.java:105) at com.zaxxer.hikari.HikariDataSource.&lt;init&gt;(HikariDataSource.java:71) at org.datanucleus.store.rdbms.connectionpool.HikariCPConnectionPoolFactory.createConnectionPool(HikariCPConnectionPoolFactory.java:176) at org.datanucleus.store.rdbms.ConnectionFactoryImpl.generateDataSources(ConnectionFactoryImpl.java:213) ... 
147 more Caused by: java.lang.Throwable: Could not connect to address=(host=prod-metastore.mysql.database.azure.com)(port=3306)(type=master) : prod-metastore.mysql.database.azure.com at org.mariadb.jdbc.internal.util.exceptions.ExceptionMapper.get(ExceptionMapper.java:175) at org.mariadb.jdbc.internal.util.exceptions.ExceptionMapper.connException(ExceptionMapper.java:83) at org.mariadb.jdbc.internal.protocol.AbstractConnectProtocol.connectWithoutProxy(AbstractConnectProtocol.java:1111) at org.mariadb.jdbc.internal.util.Utils.retrieveProxy(Utils.java:502) at org.mariadb.jdbc.MariaDbConnection.newConnection(MariaDbConnection.java:155) at org.mariadb.jdbc.Driver.connect(Driver.java:86) at com.zaxxer.hikari.util.DriverDataSource.getConnection(DriverDataSource.java:95) at com.zaxxer.hikari.util.DriverDataSource.getConnection(DriverDataSource.java:101) at com.zaxxer.hikari.pool.PoolBase.newConnection(PoolBase.java:341) at com.zaxxer.hikari.pool.HikariPool.checkFailFast(HikariPool.java:506) ... 151 more Caused by: java.lang.Throwable: prod-metastore.mysql.database.azure.com at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:184) at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392) at java.net.Socket.connect(Socket.java:607) at org.mariadb.jdbc.internal.protocol.AbstractConnectProtocol.connect(AbstractConnectProtocol.java:445) at org.mariadb.jdbc.internal.protocol.AbstractConnectProtocol.connectWithoutProxy(AbstractConnectProtocol.java:1103) ... 158 more This is independent if the custom module is imported/used, even if the custom library has no real code inside. Modules from PyPI work fine, though. We could narrow it down to the activation of the VNet feature. Are there configurations or script to bypass this?
You need to make sure that you have configured the Network Security Group rules according to the documentation, specifically that you don't block traffic on port 3306. You also need to check that your user-defined routes or firewall are configured correctly and don't block outgoing traffic to the built-in Hive metastore - the host name & IP address are specific to the region and can be found in the documentation.
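A quick sanity check you can run from a notebook on that cluster is a plain TCP connection test to the metastore host on port 3306; this is a generic sketch (the host below is taken from the stack trace, substitute the regional metastore host from the documentation if it differs):

import socket

host, port = "prod-metastore.mysql.database.azure.com", 3306  # host from the error message

try:
    # Try to open a raw TCP connection to the Hive metastore backend.
    with socket.create_connection((host, port), timeout=5):
        print("Port 3306 is reachable - NSG/UDR/firewall rules allow the traffic")
except OSError as exc:
    print(f"Port 3306 is blocked or unreachable: {exc}")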
5
2
72,156,750
2022-5-7
https://stackoverflow.com/questions/72156750/telegram-bot-to-send-auto-message-every-n-hours-with-python-telegram-bot
I am quite new in building bots so I created a very simple Telegram bot and it works great but can't figure out how to make the bot send messages every n minutes or n hours when /start_auto command is initiated. I made a workaround with while loop but it looks stupid and during the loop users won't be able to interact with the bot about other topics. I want users to be able to start and stop this scheduled task by commands such as /start_auto and /stop_auto. I know there are many other answered questions related to this topic but none of them seem to be working with my code. import logging import os import time from telegram.ext import Updater, CommandHandler, MessageHandler, Filters logger = logging.getLogger(__name__) PORT = int(os.environ.get('PORT', '8443')) def start(update, context): """Sends a message when the command /start is issued.""" update.message.reply_text('Hi!') def help(update, context): """Sends a message when the command /help is issued.""" update.message.reply_text('Help!') def start_auto(update, context): """Sends a message when the command /start_auto is issued.""" n = 0 while n < 12: time.sleep(3600) update.message.reply_text('Auto message!') n += 1 def error(update, context): """Logs Errors caused by Updates.""" logger.warning('Update "%s" caused error "%s"', update, context.error) def main(): TOKEN = 'TOKEN_GOES_HERE' APP_NAME = 'https://my-tele-test.herokuapp.com/' updater = Updater(TOKEN, use_context=True) # Get the dispatcher to register handlers dp = updater.dispatcher dp.add_handler(CommandHandler("start", start)) dp.add_handler(CommandHandler("help", help)) dp.add_handler(CommandHandler("start_auto", start_auto)) # log all errors dp.add_error_handler(error) updater.start_webhook(listen="0.0.0.0", port=PORT, url_path=TOKEN, webhook_url=APP_NAME + TOKEN) updater.idle() if __name__ == '__main__': main()
I will post the solution I found: def callback_auto_message(context): context.bot.send_message(chat_id='12345678', text='Automatic message!') def start_auto_messaging(update, context): chat_id = update.message.chat_id context.job_queue.run_repeating(callback_auto_message, 10, context=chat_id, name=str(chat_id)) # context.job_queue.run_once(callback_auto_message, 3600, context=chat_id) # context.job_queue.run_daily(callback_auto_message, time=datetime.time(hour=9, minute=22), days=(0, 1, 2, 3, 4, 5, 6), context=chat_id) def stop_notify(update, context): chat_id = update.message.chat_id context.bot.send_message(chat_id=chat_id, text='Stopping automatic messages!') job = context.job_queue.get_jobs_by_name(str(chat_id)) job[0].schedule_removal() And in the main function I registered the commands: dp.add_handler(CommandHandler("auto", start_auto_messaging)) dp.add_handler(CommandHandler("stop", stop_notify))
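One small hardening of the stop handler (a sketch against the python-telegram-bot v13-style API used above): get_jobs_by_name returns an empty tuple when nothing is scheduled, so guard against that before removing the job.

def stop_notify(update, context):
    chat_id = update.message.chat_id
    jobs = context.job_queue.get_jobs_by_name(str(chat_id))
    if not jobs:
        # Nothing scheduled for this chat, so there is nothing to remove.
        context.bot.send_message(chat_id=chat_id, text='No automatic messages are running.')
        return
    for job in jobs:
        job.schedule_removal()
    context.bot.send_message(chat_id=chat_id, text='Stopping automatic messages!')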
5
6
72,155,476
2022-5-7
https://stackoverflow.com/questions/72155476/is-this-greedy-behavior-of-lists-guaranteed
I occasionally use the "trick" to extend a list by a mapped version of itself, for example to efficiently compute powers of 2: from operator import mul powers = [1] powers += map(mul, [2] * 10, powers) print(powers) # prints [1, 2, 4, 8, 16, 32, 64, 128, 256, 512, 1024] This relies on the += immediately appending each value from map to the list, so that the map then finds it and the procedure continues. In other words, it needs to work like this: powers = [1] for value in map(mul, [2] * 10, powers): powers.append(value) And not first compute and store the whole right-hand side like this, where powers ends up being [1, 2]: powers = [1] powers += list(map(mul, [2] * 10, powers)) Is it somewhere guaranteed that it works like it does? I checked the Mutable Sequence Types documentation but it doesn't say much about it other than implying equivalence of s += t and s.extend(t). It does refer to MutableSequence, whose source code includes this: def extend(self, values): 'S.extend(iterable) -- extend sequence by appending elements from the iterable' if values is self: values = list(values) for v in values: self.append(v) def __iadd__(self, values): self.extend(values) return self This does suggest that it's indeed supposed to work like it does and like I want it, but some source code being what it is doesn't feel as safe as a guarantee in the documentation.
I don't see any tests or docs that the greedy behavior is guaranteed; however, I do think it is the expected behavior and that code in the wild relies on it. FWIW, += with lists is equivalent to list.extend(), so your "trick" boils down to: >>> powers = [1] >>> powers.extend(2*x for x in islice(powers, 10)) >>> powers [1, 2, 4, 8, 16, 32, 64, 128, 256, 512, 1024] While I haven't found a guarantee for += or extend, we do have a guarantee that the list iterator allows mutation while iterating.¹ So, this code is on firm ground: >>> powers = [1] >>> for x in powers: if len(powers) == 10: break powers.append(2 * x) >>> powers [1, 2, 4, 8, 16, 32, 64, 128, 256, 512] ¹ See the second paragraph following the table at: https://docs.python.org/3/library/stdtypes.html#common-sequence-operations: Forward and reversed iterators over mutable sequences access values using an index. That index will continue to march forward (or backward) even if the underlying sequence is mutated. The iterator terminates only when an IndexError or a StopIteration is encountered (or when the index drops below zero).
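If you would rather not rely on that behavior at all, the same result can be produced with fully documented semantics via itertools.accumulate (the initial keyword exists since Python 3.8):

from itertools import accumulate, repeat
from operator import mul

# accumulate folds mul over ten 2s, starting from 1: 1, 2, 4, ..., 1024
powers = list(accumulate(repeat(2, 10), mul, initial=1))
print(powers)  # [1, 2, 4, 8, 16, 32, 64, 128, 256, 512, 1024]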
15
7
72,151,781
2022-5-7
https://stackoverflow.com/questions/72151781/how-can-i-get-a-raspberry-pi-pico-to-communicate-with-a-pc-external-devices
For example when I give 5 to the code, I want to turn on the LED in our RPi pico (connected to a PC via a cable). #This code will run in my computer (test.py) x=int(input("Number?")) if (x==5): #turn on raspberry pi pico led The code of the RPi pico: #This code will run in my rpi pico (pico.py) from machine import Pin led = Pin(25, Pin.OUT) led.value(1) Or vice versa (doing something in the code on the computer with the code in the RPi pico). How can I call/get a variable in the PC to the RPi pico? Note: I am writing code with OpenCV Python and I want to process the data from my computer's camera on my computer. I want the RPi pico to react according to the processed data.
A simple method of communicating between the host and the Pico is to use the serial port. I have a rp2040-zero, which presents itself to the host as /dev/ttyACM0. If I use code like this on the rp2040: import sys import machine led = machine.Pin(24, machine.Pin.OUT) def led_on(): led(1) def led_off(): led(0) while True: # read a command from the host v = sys.stdin.readline().strip() # perform the requested action if v.lower() == "on": led_on() elif v.lower() == "off": led_off() Then I can run this on the host to blink the LED: import serial import time # open a serial connection s = serial.Serial("/dev/ttyACM0", 115200) # blink the led while True: s.write(b"on\n") time.sleep(1) s.write(b"off\n") time.sleep(1) This is obviously just one-way communication, but you could of course implement a mechanism for passing information back to the host.
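For the two-way part the answer alludes to, a minimal sketch of the host side (assuming the Pico replies with print(), which MicroPython sends over the same USB serial port by default):

# Host side: send a command, then read whatever the Pico print()s back.
import serial

s = serial.Serial("/dev/ttyACM0", 115200, timeout=2)
s.write(b"on\n")
reply = s.readline().decode().strip()
print("Pico said:", reply)

On the Pico side this would just mean adding a print("led on") (or similar) after led_on() and led_off().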
4
11
72,152,748
2022-5-7
https://stackoverflow.com/questions/72152748/time-it-takes-to-square-in-python
I was wondering whether x**2 or x*x is faster def sqr(x): for i in range (20): x = x**2 return x def sqr_(x): for i in range (20): x = x*x return x When I time it, this is what I get: The time it takes for x**2: 101230500 The time it takes for x*x: 201469200 I have tried it many many times, they are either equal, or x ** 2 is faster than x * x. But x*x is never faster than x**2. So I disassembled the code: For x**2: 5 12 LOAD_FAST 0 (x) 14 LOAD_CONST 2 (2) 16 BINARY_POWER 18 STORE_FAST 0 (x) 20 JUMP_ABSOLUTE 8 For x*x: 9 12 LOAD_FAST 0 (x) 14 LOAD_FAST 0 (x) 16 BINARY_MULTIPLY 18 STORE_FAST 0 (x) 20 JUMP_ABSOLUTE 8 Is it about load_const being slightly faster than load_fast? LOAD_CONST: takes the literal value at index 1 of co_consts and pushes it LOAD_FAST is accessing the value in an array by index Or binary_power is faster than binary_multiply (I actually don't know the binary_power algorithm)?
For small integers, x*x is significantly faster than x**2 since CPython does a lot more work internally to compute a**b. Actually, on my machine x*x is 4 times faster (processor i5-9600KF, CPython 3.8.1, on Windows). That being said, in your code the numbers grow very quickly and Python integers are unbounded. In fact, each exponentiation causes the binary representation to double in size. The exponents are multiplied together, resulting in the computation of x**(2*2*2*...*2) = x**(2**20) = x**1048576. For x=2, the resulting number takes 128 KiB in memory and for x=100 it takes 850 KiB. This is pretty big. Each iteration of your loop is bounded by the computation of such huge numbers in memory. As a result, for large numbers, x*x and x**2 are equally fast because the same internal computation is done in both cases and the overhead of the CPython interpreter becomes negligible compared to the computation of the huge integers. Under the hood: internally, CPython appears to use _PyNumber_PowerNoMod which calls PyNumber_Power which calls ternary_op, and PyNumber_Multiply which calls binary_op1. Note that CPython is not optimized to compute x**2: internally CPython computes pow(x, 2, None), which is the function for modular exponentiation (and the call to pow is a bit less efficient, as CPython has to check that pow has not been overwritten). This modular exponentiation function is much more expensive since it is a very generic function compared to x * x. In the end, it appears long_mul and long_pow are called in your case (note that long_pow calls long_mul internally, so long_pow actually needs to execute more instructions). For large numbers, CPython uses the Karatsuba multiplication (see: k_mul). Note that CPython is actually very slow in both cases since it takes several nanoseconds (at least on my machine) and performs dozens of checks and many function calls just to multiply two integers. This can be done natively in only 1 cycle for 64-bit integers on mainstream x86-64 processors. Large integers cannot be natively computed by mainstream processors and require a much more expensive computation.
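A quick way to reproduce the small-integer gap described above (absolute numbers will vary by machine and CPython version):

import timeit

# Keep x small so interpreter overhead dominates, not big-integer arithmetic.
print("x * x :", timeit.timeit("x * x", globals={"x": 7}))
print("x ** 2:", timeit.timeit("x ** 2", globals={"x": 7}))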
6
6
72,145,492
2022-5-6
https://stackoverflow.com/questions/72145492/conda-init-without-closing-the-current-shell
There are a number of use cases for which I am trying to use conda . The main headache is that conda init just does not want to play fair within the flow of a script in the same bash shell. I frequently see CommandNotFoundError: Your shell has not been properly configured to use 'conda activate'. That happens even though the contents of the conda initialize script as well as conda init have been executed. The results are inconsistent: sometimes I have seen the init logic work, others not. I have not been able to ascertain what magic is happening and what it expects in order to work properly.. conda_init() { # __conda_setup="$('$CONDA_DIR/bin/conda' 'shell.bash' 'hook' 2> /dev/null)"; source $CONDA_DIR/etc/profile.d/conda.sh export PATH="$CONDA_DIR/condabin:$CONDA_DIR/bin:$PATH"; export CONDARC=$CONDA_DIR ; conda init bash } conda_init conda activate py38 That gives us /Users/steve/opt/miniconda3/bin/conda /Users/steve/opt/miniconda3/bin::/Users/steve/opt/miniconda3/condabin:<other stuff..> CommandNotFoundError: Your shell has not been properly configured to use 'conda activate'. To initialize your shell, run $ conda init <SHELL_NAME> Currently supported shells are: - bash - fish - tcsh - xonsh - zsh - powershell See 'conda init --help' for more information and options. IMPORTANT: You may need to close and restart your shell after running 'conda init'. How can that conda_init() be made reliable for being able to run conda init in the same shell?
Generally, one should not need to "activate" an environment when working programmatically. That's what conda run is for... conda run -n py38 python my_script.py Otherwise, if CONDA_DIR is defined, then the following would run the initialization shell commands in an active bash session: eval "$(${CONDA_DIR}/bin/conda shell.bash hook 2> /dev/null)"
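A sketch of the programmatic route the answer recommends, driving conda run from Python with subprocess (the environment name py38 comes from the question):

import subprocess

# Run a command inside the py38 environment without activating it in this shell.
result = subprocess.run(
    ["conda", "run", "-n", "py38", "python", "-c", "import sys; print(sys.executable)"],
    capture_output=True,
    text=True,
    check=True,
)
print(result.stdout.strip())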
5
7
72,144,371
2022-5-6
https://stackoverflow.com/questions/72144371/how-to-fix-tiktok-selenium-robot-detection
How to fix TikTok selenium robot detection Background-Info I'm creating a python selenium bot to do things on the TikTok website. The user will log in manually so the website detecting mouse movement and typing speed is irrelevant.The issue is, is that I can't log in while using selenium What I've tried I've tried logging in normally without selenium in incognito mode on chrome with the same Mac address, IP address, and same login details (Which worked!!) I've tried using random user agents in selenium (Which didn't Work) I've tried adding the following chrome options options.add_argument("start-maximized") # Chrome is controlled by automated test software options.add_experimental_option("excludeSwitches", ["enable-automation"]) options.add_experimental_option('useAutomationExtension', False) # avoiding detection options.add_argument('--disable-blink-features=AutomationControlled') What I want I want to be able to log in without TikTok saying Too many log-in attempts. Try again later and for more clarification, I can log in normally without selenium same everything and it works it just doesn't work while in selenium. Heres is the code for starting selenium post = "https://www.tiktok.com/@smoothmovesranch/video/7091224442243681579" myProxy = "" #configuration options = Options() prox = Proxy() prox.proxy_type = ProxyType.MANUAL prox.http_proxy = myProxy prox.ssl_proxy = myProxy capabilities = webdriver.DesiredCapabilities.CHROME prox.add_to_capabilities(capabilities) options.add_argument("window-size=1400,600") options.add_argument("--incognito") driver = webdriver.Chrome(executable_path = os.path.join(os.getcwd(), 'chromedriver'), options=options) #opens tiktok login page driver.get('https://www.tiktok.com/login/phone-or-email/email')
A few things that might help: Make sure your proxy changes on every login attempt. For every new login, create a new webdriver environment, either with the same proxy or a new one. Add random wait times. For example, Instagram will restrict accounts that they suspect of botting. To fix this, one solution is to make the Selenium instance perform its clicking actions at different times, i.e. a wait time that fluctuates by a few seconds can do the trick. Also, this code may help with the "too many login attempts" issue. In short, it helps Selenium disguise itself better to the website's servers when navigating the site. user_agent = 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/60.0.3112.50 Safari/537.36' options.add_argument('user-agent={0}'.format(user_agent))
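A sketch of the random-wait suggestion (purely illustrative; the pause ranges are made up and driver is the webdriver instance from the question):

import random
import time

def human_pause(low=1.5, high=4.0):
    # Sleep for a random interval so actions do not happen at a perfectly regular cadence.
    time.sleep(random.uniform(low, high))

driver.get('https://www.tiktok.com/login/phone-or-email/email')
human_pause()
# ...perform the next click or typing step, then pause again...
human_pause(2.0, 6.0)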
4
2
72,140,531
2022-5-6
https://stackoverflow.com/questions/72140531/flatten-xml-data-as-a-pandas-dataframe
How can I convert this XML file at this address into a pandas dataframe? I have downloaded the XML as a file and called it '058com.xml' and run the code below, though the last column of the resulting dataframe is a mess of data arranged as multiple OrderedDict. The XML structure seems complex and is beyond my knowledge. json_normalize documentation left me confused. How can I improve the code to fully flatten the XML ? import pandas as pd import xmltodict rawdata = '058com.xml' with open(rawdata) as fd: doc = xmltodict.parse(fd.read(), encoding='ISO-8859-1', process_namespaces=False) pd.json_normalize(doc['Election']['Departement']['Communes']['Commune']) Ideally the dataframe should look like ID's, names for geographic entities and vote results and names of election candidates. The final dataframe should contain a lot of columns when fully flatten and is expected to be very close of the CSV below. I pasted the headers and the first line in the form of a .csv (semi-colon separated) as a resentative sample of what the dataframe should look like Code du département;Libellé du département;Code de la commune;Libellé de la commune;Etat saisie;Inscrits;Abstentions;% Abs/Ins;Votants;% Vot/Ins;Blancs;% Blancs/Ins;% Blancs/Vot;Nuls;% Nuls/Ins;% Nuls/Vot;Exprimés;% Exp/Ins;% Exp/Vot;N°Panneau;Sexe;Nom;Prénom;Voix;% Voix/Ins;% Voix/Exp 01;Ain;001;L'Abergement-Clémenciat;Complet;645;108;16,74;537;83,26;16;2,48;2,98;1;0,16;0,19;520;80,62;96,83;1;F;ARTHAUD;Nathalie;3;0,47;0,58;2;M;ROUSSEL;Fabien;6;0,93;1,15;3;M;MACRON;Emmanuel;150;23,26;28,85;4;M;LASSALLE;Jean;18;2,79;3,46;5;F;LE PEN;Marine;149;23,10;28,65;6;M;ZEMMOUR;Éric;43;6,67;8,27;7;M;MÉLENCHON;Jean-Luc;66;10,23;12,69;8;F;HIDALGO;Anne;5;0,78;0,96;9;M;JADOT;Yannick;30;4,65;5,77;10;F;PÉCRESSE;Valérie;26;4,03;5,00;11;M;POUTOU;Philippe;3;0,47;0,58;12;M;DUPONT-AIGNAN;Nicolas;21;3,26;4,04
Since the URL really contains two data sections under each <Tour>, specifically <Mentions> (which appear to be aggregate vote data) and <Candidats> (which are granular person-level data) (pardon my French), consider building two separate data frames using the new IO method, pandas.read_xml, which supports XSLT 1.0 (via the third-party lxml package). No migration to dictionaries for JSON handling. As a special purpose language written in XML, XSLT can transform your nested structure to flatter format for migration to data frame. Specifically, each stylesheet drills down to the most granular node and then by the ancestor axis pulls higher level information as sibling columns. Mentions (save as .xsl, a special .xml file or embed as string in Python) <?xml version="1.0" encoding="UTF-8"?> <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform"> <xsl:output indent="yes"/> <xsl:strip-space elements="*"/> <xsl:template match="/"> <Tours> <xsl:apply-templates select="descendant::Tour/Mentions"/> </Tours> </xsl:template> <xsl:template match="Mentions/*"> <Mention> <xsl:copy-of select="ancestor::Election/Scrutin/*"/> <xsl:copy-of select="ancestor::Departement/*[name()!='Communes']"/> <xsl:copy-of select="ancestor::Commune/*[name()!='Tours']"/> <xsl:copy-of select="ancestor::Tour/NumTour"/> <Mention><xsl:value-of select="name()"/></Mention> <xsl:copy-of select="*"/> </Mention> </xsl:template> </xsl:stylesheet> Python (read directly from URL) url = ( "https://www.resultats-elections.interieur.gouv.fr/telechargements/" "PR2022/resultatsT1/027/058/058com.xml" ) mentions_df = pd.read_xml(url, stylesheet=mentions_xsl) Output Type Annee CodReg CodReg3Car LibReg CodDpt CodMinDpt CodDpt3Car LibDpt CodSubCom LibSubCom NumTour Mention Nombre RapportInscrit RapportVotant 0 Présidentielle 2022 27 27 Bourgogne-Franche-Comté 58 58 58 Nièvre 1 Achun 1 Inscrits 105 None None 1 Présidentielle 2022 27 27 Bourgogne-Franche-Comté 58 58 58 Nièvre 1 Achun 1 Abstentions 24 22,86 None 2 Présidentielle 2022 27 27 Bourgogne-Franche-Comté 58 58 58 Nièvre 1 Achun 1 Votants 81 77,14 None 3 Présidentielle 2022 27 27 Bourgogne-Franche-Comté 58 58 58 Nièvre 1 Achun 1 Blancs 2 1,90 2,47 4 Présidentielle 2022 27 27 Bourgogne-Franche-Comté 58 58 58 Nièvre 1 Achun 1 Nuls 0 0,00 0,00 ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... 
1849 Présidentielle 2022 27 27 Bourgogne-Franche-Comté 58 58 58 Nièvre 313 Vitry-Laché 1 Abstentions 13 14,94 None 1850 Présidentielle 2022 27 27 Bourgogne-Franche-Comté 58 58 58 Nièvre 313 Vitry-Laché 1 Votants 74 85,06 None 1851 Présidentielle 2022 27 27 Bourgogne-Franche-Comté 58 58 58 Nièvre 313 Vitry-Laché 1 Blancs 1 1,15 1,35 1852 Présidentielle 2022 27 27 Bourgogne-Franche-Comté 58 58 58 Nièvre 313 Vitry-Laché 1 Nuls 0 0,00 0,00 1853 Présidentielle 2022 27 27 Bourgogne-Franche-Comté 58 58 58 Nièvre 313 Vitry-Laché 1 Exprimes 73 83,91 98,65 [1854 rows x 16 columns] Candidats (save as .xsl, a special .xml file or embed as string in Python) <?xml version="1.0" encoding="UTF-8"?> <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform"> <xsl:output indent="yes"/> <xsl:strip-space elements="*"/> <xsl:template match="/"> <Candidats> <xsl:apply-templates select="descendant::Tour/Resultats/Candidats"/> </Candidats> </xsl:template> <xsl:template match="Candidat"> <xsl:copy> <xsl:copy-of select="ancestor::Election/Scrutin/*"/> <xsl:copy-of select="ancestor::Departement/*[name()!='Communes']"/> <xsl:copy-of select="ancestor::Commune/*[name()!='Tours']"/> <xsl:copy-of select="ancestor::Tour/NumTour"/> <xsl:copy-of select="*"/> </xsl:copy> </xsl:template> </xsl:stylesheet> Python (read directly from URL) url = ( "https://www.resultats-elections.interieur.gouv.fr/telechargements/" "PR2022/resultatsT1/027/058/058com.xml" ) candidats_df = pd.read_xml(url, stylesheet=candidats_xsl) Output Type Annee CodReg CodReg3Car LibReg CodDpt CodMinDpt CodDpt3Car LibDpt CodSubCom LibSubCom NumTour NumPanneauCand NomPsn PrenomPsn CivilitePsn NbVoix RapportExprime RapportInscrit 0 Présidentielle 2022 27 27 Bourgogne-Franche-Comté 58 58 58 Nièvre 1 Achun 1 1 ARTHAUD Nathalie Mme 0 0,00 0,00 1 Présidentielle 2022 27 27 Bourgogne-Franche-Comté 58 58 58 Nièvre 1 Achun 1 2 ROUSSEL Fabien M. 3 3,80 2,86 2 Présidentielle 2022 27 27 Bourgogne-Franche-Comté 58 58 58 Nièvre 1 Achun 1 3 MACRON Emmanuel M. 14 17,72 13,33 3 Présidentielle 2022 27 27 Bourgogne-Franche-Comté 58 58 58 Nièvre 1 Achun 1 4 LASSALLE Jean M. 2 2,53 1,90 4 Présidentielle 2022 27 27 Bourgogne-Franche-Comté 58 58 58 Nièvre 1 Achun 1 5 LE PEN Marine Mme 28 35,44 26,67 ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... 3703 Présidentielle 2022 27 27 Bourgogne-Franche-Comté 58 58 58 Nièvre 313 Vitry-Laché 1 8 HIDALGO Anne Mme 0 0,00 0,00 3704 Présidentielle 2022 27 27 Bourgogne-Franche-Comté 58 58 58 Nièvre 313 Vitry-Laché 1 9 JADOT Yannick M. 4 5,48 4,60 3705 Présidentielle 2022 27 27 Bourgogne-Franche-Comté 58 58 58 Nièvre 313 Vitry-Laché 1 10 PÉCRESSE Valérie Mme 6 8,22 6,90 3706 Présidentielle 2022 27 27 Bourgogne-Franche-Comté 58 58 58 Nièvre 313 Vitry-Laché 1 11 POUTOU Philippe M. 1 1,37 1,15 3707 Présidentielle 2022 27 27 Bourgogne-Franche-Comté 58 58 58 Nièvre 313 Vitry-Laché 1 12 DUPONT-AIGNAN Nicolas M. 4 5,48 4,60 [3708 rows x 19 columns] You can join resulting data frames using their shared Communes nodes: <CodSubCom> and <LibSubCom> but may have to pivot_table on the aggregate data for a one-to-many merge. 
Below demonstrates with Nombre aggregate: mentions_candidats_df = ( candidats_df.merge( mentions_df.pivot_table( index=["CodSubCom", "LibSubCom"], columns="Mention", values="Nombre", aggfunc="max" ).reset_index(), on=["CodSubCom", "LibSubCom"] ) ) mentions_candidats_df.info() <class 'pandas.core.frame.DataFrame'> Int64Index: 3708 entries, 0 to 3707 Data columns (total 25 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 Type 3708 non-null object 1 Annee 3708 non-null int64 2 CodReg 3708 non-null int64 3 CodReg3Car 3708 non-null int64 4 LibReg 3708 non-null object 5 CodDpt 3708 non-null int64 6 CodMinDpt 3708 non-null int64 7 CodDpt3Car 3708 non-null int64 8 LibDpt 3708 non-null object 9 CodSubCom 3708 non-null int64 10 LibSubCom 3708 non-null object 11 NumTour 3708 non-null int64 12 NumPanneauCand 3708 non-null int64 13 NomPsn 3708 non-null object 14 PrenomPsn 3708 non-null object 15 CivilitePsn 3708 non-null object 16 NbVoix 3708 non-null int64 17 RapportExprime 3708 non-null object 18 RapportInscrit 3708 non-null object 19 Abstentions 3708 non-null int64 20 Blancs 3708 non-null int64 21 Exprimes 3708 non-null int64 22 Inscrits 3708 non-null int64 23 Nuls 3708 non-null int64 24 Votants 3708 non-null int64 dtypes: int64(16), object(9) memory usage: 753.2+ KB In forthcoming pandas 1.5, read_xml will support dtypes to allow conversion after XSLT transformation in this case.
4
1
72,138,544
2022-5-6
https://stackoverflow.com/questions/72138544/pandas-calculate-difference-between-a-row-and-all-other-rows-and-create-column
We have data as below Name value1 Value2 finallist 0 cosmos 10 20 [10,20] 1 network 30 40 [30,40] 2 unab 20 40 [20,40] Is there any way to compute the difference between all the rows? The final output should be something like Name value1 Value2 finallist cosmos network unab 0 cosmos 10 20 [10,20] 0 40 30 1 network 30 40 [30,40] 40 0 10 2 unab 20 40 [20,40] 30 10 0 The data has many different names, and each name should become a column
You want the pairwise absolute difference of the sum of the values for each row. The easiest might be to use the underlying numpy array. absolute difference of the sum of the "value" columns # get sum of values per row and convert to numpy array a = df.filter(regex='(?i)value').sum(1).to_numpy() # compute the pairwise difference, create a DataFrame and join df2 = df.join(pd.DataFrame(abs(a-a[:,None]), columns=df['Name'], index=df.index)) output: Name value1 Value2 finallist cosmos network unab 0 cosmos 10 20 [10, 20] 0 40 30 1 network 30 40 [30, 40] 40 0 10 2 unab 20 40 [20, 40] 30 10 0
5
4
72,137,740
2022-5-6
https://stackoverflow.com/questions/72137740/how-to-replicate-pandas-dataframe-rows-and-change-periodically-one-column
I have a df like pd.DataFrame([["A1", "B1", "C1", "P"], ["A2", "B2", "C2", "P"], ["A3", "B3", "C3", "P"]], columns=["col_a", "col_b", "col_c", "col_d"]) col_a col_b col_c col_d A1 B1 C1 P A2 B2 C2 P A3 B3 C3 P ... the result I need is basically to repeat the rows and ensure that the columns have the P Q R extension in col_d for every unique row occurrence col_a col_b col_c col_d A1 B1 C1 P A1 B1 C1 Q A1 B1 C1 R A2 B2 C2 P A2 B2 C2 Q A2 B2 C2 R A3 B3 C3 P A3 B3 C3 Q A3 B3 C3 R ... All I have so far is: new_df = pd.DataFrame(np.repeat(df.values, 3, axis=0), columns=df.columns) Which results in duplication of those rows, but col_d is unchanged EDIT: Now I stumbled upon another need, where for every unique col_a and col_b I need to add "S" to col_d Resulting for instance in this: col_a col_b col_c col_d A1 B1 C1 P A1 B1 C1 Q A1 B1 C1 R A1 B1 T S A2 B2 C2 P A2 B2 C2 Q A2 B2 C2 R A2 B2 T S Thank you very much for your help!
Add values to column col_d by DataFrame.assign with numpy.tile: L = ['P','Q','R'] new_df = (pd.DataFrame(np.repeat(df.values, 3, axis=0), columns=df.columns) .assign(col_d = np.tile(L, len(df)))) print (new_df) col_a col_b col_c col_d 0 A1 B1 C1 P 1 A1 B1 C1 Q 2 A1 B1 C1 R 3 A2 B2 C2 P 4 A2 B2 C2 Q 5 A2 B2 C2 R 6 A3 B3 C3 P 7 A3 B3 C3 Q 8 A3 B3 C3 R Another similar idea is to repeat the indices and duplicate the rows with DataFrame.loc: L = ['P','Q','R'] new_df = (df.loc[df.index.repeat(3)] .assign(col_d = np.tile(L, len(df))) .reset_index(drop=True)) print (new_df) col_a col_b col_c col_d 0 A1 B1 C1 P 1 A1 B1 C1 Q 2 A1 B1 C1 R 3 A2 B2 C2 P 4 A2 B2 C2 Q 5 A2 B2 C2 R 6 A3 B3 C3 P 7 A3 B3 C3 Q 8 A3 B3 C3 R EDIT: L = ['P','Q','R','S'] new_df = (pd.DataFrame(np.repeat(df.values, len(L), axis=0), columns=df.columns) .assign(col_d = np.tile(L, len(df)), col_c = lambda x: x['col_c'].mask(x['col_d'].eq('S'), 'T'))) print (new_df) col_a col_b col_c col_d 0 A1 B1 C1 P 1 A1 B1 C1 Q 2 A1 B1 C1 R 3 A1 B1 T S 4 A2 B2 C2 P 5 A2 B2 C2 Q 6 A2 B2 C2 R 7 A2 B2 T S 8 A3 B3 C3 P 9 A3 B3 C3 Q 10 A3 B3 C3 R 11 A3 B3 T S
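An alternative sketch for the same first requirement, assigning the whole list to each row of the question's df and letting DataFrame.explode do the repetition:

L = ['P', 'Q', 'R']
new_df = (df.drop(columns='col_d')
            .assign(col_d=[L] * len(df))   # every row gets the full ['P', 'Q', 'R'] list
            .explode('col_d')
            .reset_index(drop=True))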
4
3
72,134,364
2022-5-5
https://stackoverflow.com/questions/72134364/aiflow-2-xcom-in-task-groups
I have two tasks inside a TaskGroup that need to pull xcom values to supply the job_flow_id and step_id. Here's the code: with TaskGroup('execute_my_steps') as execute_my_steps: config = {some dictionary} dependencies = {another dictionary} task_id = 'execute_spark_job_step' task_name = 'spark_job' add_step = EmrAddStepsOperator( task_id=task_id, job_flow_id="{{ task_instance.xcom_pull(dag_id='my_dag', task_ids='emr', key='return_value') }}", steps=create_emr_step(args=config, d=dependencies), aws_conn_id='aws_default', retries=3, dag=dag ) wait_for_step = EmrStepSensor( task_id='wait_for_' + task_name + '_step', job_flow_id="{{ task_instance.xcom_pull(dag_id='my_dag', task_ids='emr', key='return_value') }}", step_id="{{ task_instance.xcom_pull(dag_id='my_dag', task_ids='" + task_id + "', key='return_value') }}", retries=3, dag=dag, mode='reschedule' ) add_step >> wait_for_step The problem is the step_id does not render correctly. The wait_for_step value in the UI rendered template shows as 'None', however, the xcom return_value for execute_spark_job_step is there (this is the emr step_id). wait_for_step rendered template: execute_spark_job_step xcom: When I remove the TaskGroup, it renders fine and the step waits until the job enters the completed state. I need this to be in a task group because I will be looping through a larger config file and creating multiple steps. Why doesn't this work? Do I need a nested TaskGroup? I tried using a TaskGroup without the context manager and still no luck.
TL;DR: Your issue is happening because the id is not task_id, it's group_id.task_id, so the task_ids you pull from must carry the group prefix ('execute_my_steps.' + task_id). Your code should be: step_id="{{ task_instance.xcom_pull(dag_id='my_dag', task_ids='execute_my_steps." + task_id + "', key='return_value') }}", The explanation why it happens: When a task is assigned to a TaskGroup, the id of the task is no longer the task_id but becomes group_id.task_id to reflect this relationship. In Airflow task_id is unique, but when you use TaskGroup you can set the same task_id in different TaskGroups. If this behavior is not something that you want, you can disable it by setting prefix_group_id=False in your TaskGroup: with TaskGroup( group_id='execute_my_steps', prefix_group_id=False ) as execute_my_steps: By doing so your code will work without changes. The task_id will simply be task_id without the group_id prefix. Note that this also means that it's up to you to make sure you don't have duplicated task_ids in your DAG.
4
10
72,135,183
2022-5-6
https://stackoverflow.com/questions/72135183/how-to-create-a-new-data-frame-by-using-substring-and-matching-column-values-in
Suppose I have a simple dataframe where I have four features as food, kitchen, city, and detail. d = {'Food': ['P1|0', 'P2', 'P3|45', 'P1', 'P2', 'P4', 'P1|1', 'P3|7', 'P5', 'P1||23'], 'Kitchen' : ['L1', 'L2','L9', 'L4','L5', 'L6','L1', 'L9','L10', 'L1'], 'City': ['A', 'A', 'A', 'B', 'B','B', 'C', 'C', 'C','D'], 'Detail': ['d1', 'd2', 'd3', 'd4', 'd5', 'd6', 'd7', 'd8', 'd9','d0']} df = pd.DataFrame(data=d) My goal is to use the substring of Food value without | and create a new dataframe where I can see which kitchens do produce similar foods. The way I define similarity is that substring should match with respect to Kitchen. df['Food'] = df['Food'].apply(str) df.insert(0,'subFood',df['Food'].str.split('|').str[0]) df.iloc[: , :2] subFood Food 0 P1 P1|0 1 P2 P2 2 P3 P3|45 3 P1 P1 4 P2 P2 5 P4 P4 6 P1 P1|1 7 P3 P3|7 8 P5 P5 9 P1 P1||23 To do so, I use merge function together with query. df.merge(df, on=['subFood', 'Kitchen'], suffixes=('_1', '_2')).query('City_1 != City_2') subFood Food_1 Kitchen City_1 Detail_1 Food_2 City_2 Detail_2 1 P1 P1|0 L1 A d1 P1|1 C d7 2 P1 P1|0 L1 A d1 P1||23 D d0 3 P1 P1|1 L1 C d7 P1|0 A d1 5 P1 P1|1 L1 C d7 P1||23 D d0 6 P1 P1||23 L1 D d0 P1|0 A d1 7 P1 P1||23 L1 D d0 P1|1 C d7 11 P3 P3|45 L9 A d3 P3|7 C d8 12 P3 P3|7 L9 C d8 P3|45 A d3 I got stuck here. My intention is to have a dataframe that should look similar to the dataframe shown below. I appreciate any help and / or hint. subFood Food_1 Food_2 Kitchen City Detail P1 P1|0 P1|0 L1 A d1 P1 P1|0 P1|1 L1 C d1 ....
IIUC, you can split each row into two rows by combining the city names to a list and then using explode: merged = df.merge(df, on=["subFood","Kitchen"], suffixes=("_1","_2")).query("City_1 != City_2") merged["City"] = merged[["City_1","City_2"]].to_numpy().tolist() output = merged.drop(["City_1","City_2","Detail_2"],axis=1).explode("City").rename(columns={"Detail_1":"Detail"}) >>> output subFood Food_1 Kitchen Detail Food_2 City 1 P1 P1|0 L1 d1 P1|1 A 1 P1 P1|0 L1 d1 P1|1 C 2 P1 P1|0 L1 d1 P1||23 A 2 P1 P1|0 L1 d1 P1||23 D 3 P1 P1|1 L1 d7 P1|0 C 3 P1 P1|1 L1 d7 P1|0 A 5 P1 P1|1 L1 d7 P1||23 C 5 P1 P1|1 L1 d7 P1||23 D 6 P1 P1||23 L1 d0 P1|0 D 6 P1 P1||23 L1 d0 P1|0 A 7 P1 P1||23 L1 d0 P1|1 D 7 P1 P1||23 L1 d0 P1|1 C 11 P3 P3|45 L9 d3 P3|7 A 11 P3 P3|45 L9 d3 P3|7 C 12 P3 P3|7 L9 d8 P3|45 C 12 P3 P3|7 L9 d8 P3|45 A
4
1
72,133,537
2022-5-5
https://stackoverflow.com/questions/72133537/determining-the-validity-of-a-multi-hot-encoding
Suppose I have N items and a multi-hot vector of values {0, 1} that represents inclusion of these items in a result: N = 4 # items 1 and 3 will be included in the result vector = [0, 1, 0, 1] # item 2 will be included in the result vector = [0, 0, 1, 0] I'm also provided a matrix of conflicts which indicates which items cannot be included in the result at the same time: conflicts = [ [0, 1, 1, 0], # any result that contains items 1 AND 2 is invalid [0, 1, 1, 1], # any result that contains AT LEAST 2 items from {1, 2, 3} is invalid ] Given this matrix of conflicts, we can determine the validity of the earlier vectors: # invalid as it triggers conflict 1: [0, 1, 1, 1] vector = [0, 1, 0, 1] # valid as it triggers no conflicts vector = [0, 0, 1, 0] A naive solution to detect whether a given vector is "valid" (i.e. does not trigger any conflicts) may be done via a dot product and summation operation in numpy: violation = np.dot(conflicts, vector) is_valid = np.max(violation) <= 1 Are there are more efficient ways to perform this operation, perhaps either via np.einsum or by bypassing numpy arrays entirely in favour of bit manipulation? We assume that the number of vectors being checked can be very large (e.g. up to 2^N if we evaluate all possibilities) but that only one vector is likely being checked at a time (to avoid generating a matrix of shape up to (2^N, N) as input).
TL;DR: you can use Numba to optimize np.dot to only operate only on binary values. More specifically, you can perform SIMD-like operations on 8 bytes at once using 64-bit views. Converting lists to arrays First of all, the lists can be efficiently converted to relatively-compact arrays using this approach: vector = np.fromiter(vector, np.uint8) conflicts = np.array([np.fromiter(conflicts[i], np.uint8) for i in range(len(conflicts))]) This is faster than using the automatic Numpy conversion or np.array (there is less check to perform in the Numpy code internally and Numpy, Numpy know what type of array to build and the resulting one is smaller in memory and thus faster to fill). This step can be used to speed up your np.dot-based solution. If the input are already a Numpy array, then check they are of type np.uint8 or np.int8. Otherwise, please cast them to such type using conflits = conflits.astype(np.uint8) for example. First try Then, one solution could be to use np.packbits to pack the input binary values much as possible in an array of bits in memory, and then perform logical ANDs. But it turns out that np.packbits is pretty slow. Thus, this solution is not a good idea in the end. In fact, any solution creating temporary arrays with a shape similar to conflicts will be slow since writing such an array in memory is generally slower than np.dot (which read conflicts from memory once). Using Numba Since np.dot is pretty well optimized, the only solution to defeat it is to use an optimized native code. Numba can be used to generate a native executable code at runtime from a Numpy-based Python code thanks to a just-in-time compiler. The idea is to perform a logical ANDs between vector and rows of conflicts per block. Conflict are check for each block so to stop the computation as early as possible. Blocks can be efficiently compared by groups of 8 octets by comparing the uint64 views of the two arrays (in a SIMD-friendly way). import numba as nb @nb.njit('bool_(uint8[::1], uint8[:,::1])') def check_valid(vector, conflicts): n, m = conflicts.shape assert vector.size == m for i in range(n): block_size = 128 # In the range: 8,16,...,248 conflicts_row = conflicts[i,:] gsum = 0 # Global sum of conflicts m_limit = m // block_size * block_size for j in range(0, m_limit, block_size): vector_block = vector[j:j+block_size].view(np.uint64) conflicts_block = conflicts_row[j:j+block_size].view(np.uint64) # Matching lsum = np.uint64(0) # 8 local sums of conflicts for k in range(block_size//8): lsum += vector_block[k] & conflicts_block[k] # Trick to perform the reduction of all the bytes in lsum lsum += lsum >> 32 lsum += lsum >> 16 lsum += lsum >> 8 gsum += lsum & 0xFF # Check if there is a conflict if gsum >= 2: return False # Remaining part for j in range(m_limit, m): gsum += vector[j] & conflicts_row[j] if gsum >= 2: return False return True Results This is about 9 times faster than np.dot on my machine for a large conflicts array of shape (16, 65536) (without conflicts). The time to convert lists is not included in both cases. When there are conflicts, the provided solution is much faster since it can early stop the computation. Theoretically, the computation should be even faster, but the Numba JIT do not succeed to vectorize the loop using SIMD instructions. That being said, it seems the same issue appears for np.dot. If the arrays are even bigger, you can parallelize the computation of the blocks (at the expense of a slower computation if the function return False).
5
1
72,130,023
2022-5-5
https://stackoverflow.com/questions/72130023/numba-parallel-causing-incorrect-results-in-a-for-loop-i-cant-pinpoint-the-iss
So I have what appears to be a perfectly acceptable loop to make parallel. But when I pass it to Numba parallel, it always gives incorrect results. All that happens in the loop is an input matrix has one element set to 0, matrix multiplication occurs and populates a new matrix, then the element that was set to 0 is set back to its original value. It would appear the array a is getting modified in each dispatch of Numba, so I tried copying a to another variable inside the loop, modifying the copy only, yet obtain the same incorrect results (not shown). Here is a minimal example. I just don't see what the issue is, or how to fix it: import numpy as np from scipy.stats import random_correlation import numba as nb def myfunc(a, corr): b = np.zeros(a.shape[0]) for i in range(b.shape[0]): temp = a[i] a[i] = 0 b[i] = a@[email protected] a[i] = temp return b @nb.njit(parallel=True) def numbafunc(a, corr): b = np.zeros(a.shape[0]) for i in nb.prange(b.shape[0]): temp = a[i] a[i] = 0 b[i] = a@[email protected] a[i] = temp return b if __name__ == '__main__': a = np.random.rand(10) corr = random_correlation.rvs(eigs=[2,2,1,1,1,1,0.5,0.5,0.5,0.5]) b_1 = myfunc(a, corr) b_2 = numbafunc(a, corr) # check if serial and Numba results match off the same inputs print(np.isclose(b_1,b_2)) # double check the original function returns the same results again.. b_1_check = myfunc(a, corr) print(np.isclose(b_1, b_1_check)) Returns all false values, or at least 9/10 are false... Can anyone pinpoint which part of the code is problematic for parallelization? It looks fine to me. Much appreciated!
There is a race condition in numbafunc. Indeed, a[i] = 0 modifies the array a shared between multiple threads reading/writing a for different i values. Storing the value in temp to restore it later only works in sequential, but not in parallel since threads can read a at any time. To solve this issue, each thread should operate on its own copy of a: @nb.njit(parallel=True) def numbafunc(a, corr): b = np.zeros(a.shape[0]) for i in nb.prange(b.shape[0]): c = a.copy() c[i] = 0.0 b[i] = c @ corr @ c.T return b
4
4
72,131,251
2022-5-5
https://stackoverflow.com/questions/72131251/mypy-gives-incompatible-default-for-argument-when-dict-param-defaults-none
I understand that a Dict parameter in a Python function is best set to a default of None. However, mypy seems to disagree: def example(self, mydict: Dict[int, str] = None): return mydict.get(1) This results in the mypy error: error: Incompatible default for argument "synonyms" (default has type "None", argument has type "Dict[Any, Any]") [assignment] mypy on the other hand is fine with this: def example(self, myDict: Dict[int, str] = {}): But pylint complains: W0102: Dangerous default value {} as argument (dangerous-default-value) According to this SO question (https://stackoverflow.com/a/26320917), the default should be None but that will not work with mypy. What is the option that will satisfy both pylint and mypy? Thanks. SOLUTION (based on comments): class Example: """Example""" def __init__(self): self.d: Optional[Dict[int, str]] = None def example(self, mydict: Optional[Dict[int, str]] = None): """example""" self.d = mydict Part of the issue (not mentioned originally) I had was assigning back to a previously inited variable, this works.
I think the type should be "dict or None", so a Union of the two: def example(self, mydict: Union[Dict[int, str], None] = None): return mydict.get(1)
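Note that with an Optional/Union annotation, mypy (with its default strict-optional checking) will then flag mydict.get(1) itself, because mydict may be None at that point. A minimal standalone sketch of one common way to handle it, assuming an empty dict is an acceptable fallback:

from typing import Dict, Optional

def example(mydict: Optional[Dict[int, str]] = None) -> Optional[str]:
    # Replace the None default with a fresh dict before using it,
    # so mypy knows mydict is a Dict from here on.
    if mydict is None:
        mydict = {}
    return mydict.get(1)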
9
10
72,126,748
2022-5-5
https://stackoverflow.com/questions/72126748/what-is-the-difference-between-prophet-package-and-fbprophet-in-python
I googled how to install the fbprophet package, but the top result is how to install prophet. What is the difference between the two packages? Are they the same?
It's by the same devs. Seems it was just a name change. Prophet is on PyPI, so you can use pip to install it. From v0.6 onwards, Python 2 is no longer supported. As of v1.0, the package name on PyPI is "prophet"; prior to v1.0 it was "fbprophet". https://pythonlang.dev/repo/facebook-prophet/
7
8
72,119,316
2022-5-4
https://stackoverflow.com/questions/72119316/generate-jwt-token-signed-with-rsa-key-in-python
I am trying to convert this java code for generating JWT token in python. String privateKeyContent = privateKey .replaceAll(Definitions.ApiGeneral.LINE_BREAKER, "") .replace(Definitions.AuthProperty.PRIVATE_KEY_START, "") .replace(Definitions.AuthProperty.PRIVATE_KEY_END, ""); PKCS8EncodedKeySpec keySpecPKCS8 = new PKCS8EncodedKeySpec(Base64.getDecoder().decode(privateKeyContent)); KeyFactory kf = KeyFactory.getInstance(Definitions.AuthProperty.RSA_KEY_FACTORY); PrivateKey privKey = kf.generatePrivate(keySpecPKCS8); String jwtAudUrl = System.getenv(Definitions.IamProperty.IAM_URL_KEY) + System.getenv(Definitions.IamProperty.JWT_AUD_URI_KEY); String jwtToken = Jwts.builder() .setAudience(jwtAudUrl) .setSubject(serviceId) .setIssuer(serviceId) .setExpiration(new Date(new Date().getTime() + TimeUnit.MINUTES.toMillis(Definitions.AuthProperty.JWT_TOKEN_EXPIRY_IN_MINUTES))) .signWith(privKey) .compact(); Python: import jwt serviceID = "abc" secret = '-----BEGIN RSA PRIVATE KEY-----MIIEogIBAAKCAQEAhstYtRbkgQkFwlVr8QjSCQqqRTDMKWHdIGRYBpXcQmvKfagId9nBA2Ygh7cOrT9g8MhxYo8U1jYmPQpv6gf3LgO/J0qspLdaAhZP6LusA/HHJBR7kjTXBsLcsEDyd8S0UioBYP3DLvtWhGIR2f4o7SH1TlE96tldV6FZKGO2NHsJrJwTd+ym0AeZe0b7QZLe43LBCTLdqk05U34jrknJliSAEbGqYg4h6nrJsKBC/0pmiQ9ptD1N/Kl4bqffMWIbZq2bPP6jFrmBLe+7yTeVMKltVbJZys4nHhyYngBtbAxynXeB2tpE8If7cK75fj42MlFgquEiEZZVSzNNmrmPOwIDAQABAoH/B18Xes/Fr0jPB9GkFYpl8hijNyV0BM9VSHA0YCfR49ABQt3tmKBP7d+n58QbCV5t7r0Hdlxcx1ouvSfU9vd4jQunaH6s8lUUlwihVhjtT0npmg+EsnoxSC1f5EOo/uPC+LtTV/qIsgkMsjCqyUEc+9rfj2jh+fXpJOGt/od1b2k2xs84MsXmSF/As7GYRdw+FLbkN64R/SGmv3NLtQdg5uvBKLuKvtQWIJBuPqgKOsJWaCVO0XaoUDQeav/nTfP0ntmF0QH9JtXYzBldhGq2FPVQRUaCuJ4YPEpXlD3FptQBlX/Wu7wXbdwDz3qyXbGqSkiaR3gN+QR+IgjG3oQxAoGBAML5M4Dg0S/yobHuCRcvpngraXGBWnCTaD0mMh/cosV0HBNdZbglwVpG8Agz9BEYIrDM777HFuB+lWUvGNMR45rObPcDqn7Qj8Uwnj775Bg0Sx09MCBds6IGce/92uPUsHPx4pj8dzCj1s3cfUKO+EaUE33bnFXcMCGh/0x0sIVXAoGBALD8H4Aph2OPQK9OOHb3ULDvrmMPrpe3xv5iSzfLt7xBIP2G0Wl+q0hzEtdEOUZGQemOtyuV5t9Jwwfh9uH9jAhk9OEvoNg3F3mQl6hjmHPd1xUNvRTqwnV+VvQnw7Aeq2TZMcAKwbXf3+p430wMsxR6m0Y7rGi0FIxWbILp0RK9AoGBAIfiPBXnGYOsOysRtb42FHQN9WgI+eoZof10IFz6XWr12Bda8Wicz5vGcsWUx9YeFxdXTQOOJ5CASEiDwW5hOlqK4YBqSqolWv3YO4Gz9i00TOFs4py8EVSr3z6ekq5UbkHwY7exxLPei/dfYuE/WSN/UfJWWyev1M+r4oz7iobzAoGAKe8S56LvWT+P6/l0l3txuvqPLxmAHKKGm69ecxHprskfr/JJm91PaBMb27VmfKgY5eXSsJkL4svvUebQQCt7CmIhQ1mtmo0zGrKPvG4cqRde5rYintogyQXuRFtHmmsp4PM1PnNOAnHQ9BU/kx1PMQL712A8MXK5i6bOfxY3W2ECgYEAp9cw3NeMSy/WcXyZoyNFUdEQiHTJGPBtELjHWRnjMU1454EkWYYPiYXUnElfxP6InxHgMNbGVsd1BUEhXOdSUS5bO/WBOOPqSh6MIGfKFuMN/9SI/Y/UZKltCD1CboTvDfmkD+opkLM6YZpW9CRT7Szn7ivdFr5KqGQZUZOwn3k=-----END RSA PRIVATE KEY-----' due_date = datetime.now() + timedelta(minutes=10) header = {"alg": "RS256"} expiry = int(due_date.timestamp()) payload = {"iss": serviceID, "sub": serviceID, "exp": expiry, "aud": iam_url + "/oauth2/access_token"} priv_rsakey = serialization.load_pem_private_key(secret.encode('utf8'), password=None, backend=default_backend()) token=jwt.encode(payload, priv_rsakey, algorithm='RS256') However, I keep on getting this error : ValueError: ('Could not deserialize key data. The data may be in an incorrect format, it may be encrypted with an unsupported algorithm, or it may be an unsupported key type (e.g. EC curves with explicit parameters).', [_OpenSSLErrorWithText(code=503841036, lib=60, reason=524556, reason_text=b'error:1E08010C:DECODER routines::unsupported')]) Can someone please help me with this ?
The issue is precisely identified in the error message: Your private key is incorrectly formatted. A PEM encoded key consists of the Base64 encoded body, which contains a line break after every 64 characters, and a header and footer on separate lines. Your key is missing the line breaks. load_pem_private_key() expects at least header and footer on separate lines, but is tolerant about line breaks in the body, i.e. they are optional. So you have to pass your key e.g. like this: secret = '-----BEGIN RSA PRIVATE KEY-----\nMIIEogIBAAKCAQEAhstYtRbkgQkFwlVr8QjSCQqqRTDMKWHdIGRYBpXcQmvKfagId9nBA2Ygh7cOrT9g8MhxYo8U1jYmPQpv6gf3LgO/J0qspLdaAhZP6LusA/HHJBR7kjTXBsLcsEDyd8S0UioBYP3DLvtWhGIR2f4o7SH1TlE96tldV6FZKGO2NHsJrJwTd+ym0AeZe0b7QZLe43LBCTLdqk05U34jrknJliSAEbGqYg4h6nrJsKBC/0pmiQ9ptD1N/Kl4bqffMWIbZq2bPP6jFrmBLe+7yTeVMKltVbJZys4nHhyYngBtbAxynXeB2tpE8If7cK75fj42MlFgquEiEZZVSzNNmrmPOwIDAQABAoH/B18Xes/Fr0jPB9GkFYpl8hijNyV0BM9VSHA0YCfR49ABQt3tmKBP7d+n58QbCV5t7r0Hdlxcx1ouvSfU9vd4jQunaH6s8lUUlwihVhjtT0npmg+EsnoxSC1f5EOo/uPC+LtTV/qIsgkMsjCqyUEc+9rfj2jh+fXpJOGt/od1b2k2xs84MsXmSF/As7GYRdw+FLbkN64R/SGmv3NLtQdg5uvBKLuKvtQWIJBuPqgKOsJWaCVO0XaoUDQeav/nTfP0ntmF0QH9JtXYzBldhGq2FPVQRUaCuJ4YPEpXlD3FptQBlX/Wu7wXbdwDz3qyXbGqSkiaR3gN+QR+IgjG3oQxAoGBAML5M4Dg0S/yobHuCRcvpngraXGBWnCTaD0mMh/cosV0HBNdZbglwVpG8Agz9BEYIrDM777HFuB+lWUvGNMR45rObPcDqn7Qj8Uwnj775Bg0Sx09MCBds6IGce/92uPUsHPx4pj8dzCj1s3cfUKO+EaUE33bnFXcMCGh/0x0sIVXAoGBALD8H4Aph2OPQK9OOHb3ULDvrmMPrpe3xv5iSzfLt7xBIP2G0Wl+q0hzEtdEOUZGQemOtyuV5t9Jwwfh9uH9jAhk9OEvoNg3F3mQl6hjmHPd1xUNvRTqwnV+VvQnw7Aeq2TZMcAKwbXf3+p430wMsxR6m0Y7rGi0FIxWbILp0RK9AoGBAIfiPBXnGYOsOysRtb42FHQN9WgI+eoZof10IFz6XWr12Bda8Wicz5vGcsWUx9YeFxdXTQOOJ5CASEiDwW5hOlqK4YBqSqolWv3YO4Gz9i00TOFs4py8EVSr3z6ekq5UbkHwY7exxLPei/dfYuE/WSN/UfJWWyev1M+r4oz7iobzAoGAKe8S56LvWT+P6/l0l3txuvqPLxmAHKKGm69ecxHprskfr/JJm91PaBMb27VmfKgY5eXSsJkL4svvUebQQCt7CmIhQ1mtmo0zGrKPvG4cqRde5rYintogyQXuRFtHmmsp4PM1PnNOAnHQ9BU/kx1PMQL712A8MXK5i6bOfxY3W2ECgYEAp9cw3NeMSy/WcXyZoyNFUdEQiHTJGPBtELjHWRnjMU1454EkWYYPiYXUnElfxP6InxHgMNbGVsd1BUEhXOdSUS5bO/WBOOPqSh6MIGfKFuMN/9SI/Y/UZKltCD1CboTvDfmkD+opkLM6YZpW9CRT7Szn7ivdFr5KqGQZUZOwn3k=\n-----END RSA PRIVATE KEY-----' or secret = '''-----BEGIN RSA PRIVATE KEY----- 
MIIEogIBAAKCAQEAhstYtRbkgQkFwlVr8QjSCQqqRTDMKWHdIGRYBpXcQmvKfagId9nBA2Ygh7cOrT9g8MhxYo8U1jYmPQpv6gf3LgO/J0qspLdaAhZP6LusA/HHJBR7kjTXBsLcsEDyd8S0UioBYP3DLvtWhGIR2f4o7SH1TlE96tldV6FZKGO2NHsJrJwTd+ym0AeZe0b7QZLe43LBCTLdqk05U34jrknJliSAEbGqYg4h6nrJsKBC/0pmiQ9ptD1N/Kl4bqffMWIbZq2bPP6jFrmBLe+7yTeVMKltVbJZys4nHhyYngBtbAxynXeB2tpE8If7cK75fj42MlFgquEiEZZVSzNNmrmPOwIDAQABAoH/B18Xes/Fr0jPB9GkFYpl8hijNyV0BM9VSHA0YCfR49ABQt3tmKBP7d+n58QbCV5t7r0Hdlxcx1ouvSfU9vd4jQunaH6s8lUUlwihVhjtT0npmg+EsnoxSC1f5EOo/uPC+LtTV/qIsgkMsjCqyUEc+9rfj2jh+fXpJOGt/od1b2k2xs84MsXmSF/As7GYRdw+FLbkN64R/SGmv3NLtQdg5uvBKLuKvtQWIJBuPqgKOsJWaCVO0XaoUDQeav/nTfP0ntmF0QH9JtXYzBldhGq2FPVQRUaCuJ4YPEpXlD3FptQBlX/Wu7wXbdwDz3qyXbGqSkiaR3gN+QR+IgjG3oQxAoGBAML5M4Dg0S/yobHuCRcvpngraXGBWnCTaD0mMh/cosV0HBNdZbglwVpG8Agz9BEYIrDM777HFuB+lWUvGNMR45rObPcDqn7Qj8Uwnj775Bg0Sx09MCBds6IGce/92uPUsHPx4pj8dzCj1s3cfUKO+EaUE33bnFXcMCGh/0x0sIVXAoGBALD8H4Aph2OPQK9OOHb3ULDvrmMPrpe3xv5iSzfLt7xBIP2G0Wl+q0hzEtdEOUZGQemOtyuV5t9Jwwfh9uH9jAhk9OEvoNg3F3mQl6hjmHPd1xUNvRTqwnV+VvQnw7Aeq2TZMcAKwbXf3+p430wMsxR6m0Y7rGi0FIxWbILp0RK9AoGBAIfiPBXnGYOsOysRtb42FHQN9WgI+eoZof10IFz6XWr12Bda8Wicz5vGcsWUx9YeFxdXTQOOJ5CASEiDwW5hOlqK4YBqSqolWv3YO4Gz9i00TOFs4py8EVSr3z6ekq5UbkHwY7exxLPei/dfYuE/WSN/UfJWWyev1M+r4oz7iobzAoGAKe8S56LvWT+P6/l0l3txuvqPLxmAHKKGm69ecxHprskfr/JJm91PaBMb27VmfKgY5eXSsJkL4svvUebQQCt7CmIhQ1mtmo0zGrKPvG4cqRde5rYintogyQXuRFtHmmsp4PM1PnNOAnHQ9BU/kx1PMQL712A8MXK5i6bOfxY3W2ECgYEAp9cw3NeMSy/WcXyZoyNFUdEQiHTJGPBtELjHWRnjMU1454EkWYYPiYXUnElfxP6InxHgMNbGVsd1BUEhXOdSUS5bO/WBOOPqSh6MIGfKFuMN/9SI/Y/UZKltCD1CboTvDfmkD+opkLM6YZpW9CRT7Szn7ivdFr5KqGQZUZOwn3k= -----END RSA PRIVATE KEY-----''' With this change the code works (after adding the missing iam_url and the missing import statements). Note that PKCS8EncodedKeySpec in the Java code expects a DER encoded private key in PKCS#8 format, while in the Python code a PEM encoded private key in PKCS#1 format is applied. A DER encoded key results from a PEM encoded key by removing header, footer and all line breaks, and Base64 decoding the rest. 
The Cryptography library supports the import of a DER encoded private key with load_der_private_key(): import base64 secret = base64.b64decode('MIIEogIBAAKCAQEAhstYtRbkgQkFwlVr8QjSCQqqRTDMKWHdIGRYBpXcQmvKfagId9nBA2Ygh7cOrT9g8MhxYo8U1jYmPQpv6gf3LgO/J0qspLdaAhZP6LusA/HHJBR7kjTXBsLcsEDyd8S0UioBYP3DLvtWhGIR2f4o7SH1TlE96tldV6FZKGO2NHsJrJwTd+ym0AeZe0b7QZLe43LBCTLdqk05U34jrknJliSAEbGqYg4h6nrJsKBC/0pmiQ9ptD1N/Kl4bqffMWIbZq2bPP6jFrmBLe+7yTeVMKltVbJZys4nHhyYngBtbAxynXeB2tpE8If7cK75fj42MlFgquEiEZZVSzNNmrmPOwIDAQABAoH/B18Xes/Fr0jPB9GkFYpl8hijNyV0BM9VSHA0YCfR49ABQt3tmKBP7d+n58QbCV5t7r0Hdlxcx1ouvSfU9vd4jQunaH6s8lUUlwihVhjtT0npmg+EsnoxSC1f5EOo/uPC+LtTV/qIsgkMsjCqyUEc+9rfj2jh+fXpJOGt/od1b2k2xs84MsXmSF/As7GYRdw+FLbkN64R/SGmv3NLtQdg5uvBKLuKvtQWIJBuPqgKOsJWaCVO0XaoUDQeav/nTfP0ntmF0QH9JtXYzBldhGq2FPVQRUaCuJ4YPEpXlD3FptQBlX/Wu7wXbdwDz3qyXbGqSkiaR3gN+QR+IgjG3oQxAoGBAML5M4Dg0S/yobHuCRcvpngraXGBWnCTaD0mMh/cosV0HBNdZbglwVpG8Agz9BEYIrDM777HFuB+lWUvGNMR45rObPcDqn7Qj8Uwnj775Bg0Sx09MCBds6IGce/92uPUsHPx4pj8dzCj1s3cfUKO+EaUE33bnFXcMCGh/0x0sIVXAoGBALD8H4Aph2OPQK9OOHb3ULDvrmMPrpe3xv5iSzfLt7xBIP2G0Wl+q0hzEtdEOUZGQemOtyuV5t9Jwwfh9uH9jAhk9OEvoNg3F3mQl6hjmHPd1xUNvRTqwnV+VvQnw7Aeq2TZMcAKwbXf3+p430wMsxR6m0Y7rGi0FIxWbILp0RK9AoGBAIfiPBXnGYOsOysRtb42FHQN9WgI+eoZof10IFz6XWr12Bda8Wicz5vGcsWUx9YeFxdXTQOOJ5CASEiDwW5hOlqK4YBqSqolWv3YO4Gz9i00TOFs4py8EVSr3z6ekq5UbkHwY7exxLPei/dfYuE/WSN/UfJWWyev1M+r4oz7iobzAoGAKe8S56LvWT+P6/l0l3txuvqPLxmAHKKGm69ecxHprskfr/JJm91PaBMb27VmfKgY5eXSsJkL4svvUebQQCt7CmIhQ1mtmo0zGrKPvG4cqRde5rYintogyQXuRFtHmmsp4PM1PnNOAnHQ9BU/kx1PMQL712A8MXK5i6bOfxY3W2ECgYEAp9cw3NeMSy/WcXyZoyNFUdEQiHTJGPBtELjHWRnjMU1454EkWYYPiYXUnElfxP6InxHgMNbGVsd1BUEhXOdSUS5bO/WBOOPqSh6MIGfKFuMN/9SI/Y/UZKltCD1CboTvDfmkD+opkLM6YZpW9CRT7Szn7ivdFr5KqGQZUZOwn3k=') priv_rsakey = serialization.load_der_private_key(secret, password=None, backend=default_backend()) load_pem_private_key() and load_der_private_key() support both PKCS#8 and PKCS#1 format.
4
6
72,121,390
2022-5-5
https://stackoverflow.com/questions/72121390/how-to-use-jupyterlab-in-visual-studio-code
is there a way to use JupyterLab in VS Code? I know that VS Code provides the Jupyter Notebook extension. However, I need to connect to another server remotely...... Any guidance will be appreciated!
You can offload intensive computation in a Jupyter Notebook to other computers by connecting to a remote Jupyter server. Once connected, code cells run on the remote server rather than the local computer. To connect to a remote Jupyter server: Select the Jupyter Server: local button in the global Status bar or run the Jupyter: Specify local or remote Jupyter server for connections command from the Command Palette (Ctrl+Shift+P). When prompted to Pick how to connect to Jupyter, select Existing: Specify the URI of an existing server. When prompted to Enter the URI of a Jupyter server, provide the server's URI (hostname) with the authentication token included with a URL parameter. (If you start the server in the VS Code terminal with an authentication token enabled, the URL with the token typically appears in the terminal output from where you can copy it.) Alternatively, you can specify a username and password after providing the URI. For guidance about securing a notebook server, refer to the Jupyter documentation.
8
7
72,118,665
2022-5-4
https://stackoverflow.com/questions/72118665/particle-detection-with-python-opencv
I'm looking for a proper solution how to count particles and measure their sizes in this image: In the end I have to obtain the lists of particles' coordinates and area squares. After some search on the internet I realized there are 3 approaches for particles detection: blobs Contours connectedComponentsWithStats Looking at different projects I assembled some code with the mix of it. import pylab import cv2 import numpy as np Gaussian blurring and thresholding original_image = cv2.imread(img_path) img = original_image img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) img = cv2.GaussianBlur(img, (5, 5), 0) img = cv2.blur(img, (5, 5)) img = cv2.medianBlur(img, 5) img = cv2.bilateralFilter(img, 6, 50, 50) max_value = 255 adaptive_method = cv2.ADAPTIVE_THRESH_GAUSSIAN_C threshold_type = cv2.THRESH_BINARY block_size = 11 img_thresholded = cv2.adaptiveThreshold(img, max_value, adaptive_method, threshold_type, block_size, -3) filter small objects min_size = 4 nb_components, output, stats, centroids = cv2.connectedComponentsWithStats(img, connectivity=8) sizes = stats[1:, -1] nb_components = nb_components - 1 # for every component in the image, you keep it only if it's above min_size for i in range(0, nb_components): if sizes[i] < min_size: img[output == i + 1] = 0 generation of Contours for filling holes and measurements. pos_list and size_list is what we were looking for contours, hierarchy = cv2.findContours(img, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE) pos_list = [] size_list = [] for i in range(len(contours)): area = cv2.contourArea(contours[i]) size_list.append(area) (x, y), radius = cv2.minEnclosingCircle(contours[i]) pos_list.append((int(x), int(y))) for the self-check, if we plot these coordinates over the original image pts = np.array(pos_list) pylab.figure(0) pylab.imshow(original_image) pylab.scatter(pts[:, 0], pts[:, 1], marker="x", color="green", s=5, linewidths=1) pylab.show() We might get something like the following: And... I'm not really satisfied with the results. Some clearly visible particles are not included, on the other side, some doubt fluctuations of intensity have been counted. I'm playing now with different filters' settings, but the feeling is it's wrong. If someone knows how to improve my solution, please share.
Since the particles are in white and the background in black, we can use Kmeans Color Quantization to segment the image into two groups with cluster=2. This will allow us to easily distinguish between particles and the background. Since the particles may be very tiny, we should try to avoid blurring, dilating, or any morphological operations which may alter the particle contours. Here's an approach: Kmeans color quantization. We perform Kmeans with two clusters, grayscale, then Otsu's threshold to obtain a binary image. Filter out super tiny noise. Next we find contours, remove tiny specs of noise using contour area filtering, and collect each particle (x, y) coordinate and its area. We remove tiny particles on the binary mask by "filling in" these contours to effectively erase them. Apply mask onto original image. Now we bitwise-and the filtered mask onto the original image to highlight the particle clusters. Kmeans with clusters=2 Result Number of particles: 204 Average particle size: 30.537 Code import cv2 import numpy as np import pylab # Kmeans def kmeans_color_quantization(image, clusters=8, rounds=1): h, w = image.shape[:2] samples = np.zeros([h*w,3], dtype=np.float32) count = 0 for x in range(h): for y in range(w): samples[count] = image[x][y] count += 1 compactness, labels, centers = cv2.kmeans(samples, clusters, None, (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 10000, 0.0001), rounds, cv2.KMEANS_RANDOM_CENTERS) centers = np.uint8(centers) res = centers[labels.flatten()] return res.reshape((image.shape)) # Load image image = cv2.imread('1.png') original = image.copy() # Perform kmeans color segmentation, grayscale, Otsu's threshold kmeans = kmeans_color_quantization(image, clusters=2) gray = cv2.cvtColor(kmeans, cv2.COLOR_BGR2GRAY) thresh = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)[1] # Find contours, remove tiny specs using contour area filtering, gather points points_list = [] size_list = [] cnts, _ = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[-2:] AREA_THRESHOLD = 2 for c in cnts: area = cv2.contourArea(c) if area < AREA_THRESHOLD: cv2.drawContours(thresh, [c], -1, 0, -1) else: (x, y), radius = cv2.minEnclosingCircle(c) points_list.append((int(x), int(y))) size_list.append(area) # Apply mask onto original image result = cv2.bitwise_and(original, original, mask=thresh) result[thresh==255] = (36,255,12) # Overlay on original original[thresh==255] = (36,255,12) print("Number of particles: {}".format(len(points_list))) print("Average particle size: {:.3f}".format(sum(size_list)/len(size_list))) # Display cv2.imshow('kmeans', kmeans) cv2.imshow('original', original) cv2.imshow('thresh', thresh) cv2.imshow('result', result) cv2.waitKey()
6
8
72,110,565
2022-5-4
https://stackoverflow.com/questions/72110565/i-want-to-create-python-logo-using-turtle-module
I want to create Python LOGO. So I import Turtle Module into my code. My problem is it creates only half Python LOGO and then throws errors. How can I resolve it? Python Logo Using Python Turtle | Cool Python Turtle Graphics | Python Turtle coding| coding I'm trying to create a PYTHON LOGO using turtle module. However, I'm stuck on this and don't know how to proceed. CODE BEGIN from turtle import * speed(100) #blue part pencolor('#4584b6') fillcolor('#4584b6') begin_fill() penup() goto(-70,20) left(180) pendown() forward(10) def curve(): for i in range(50): forward(0.5) right(1) for i in range(80): forward(2) right(1) for i in range(50): forward(0.5) right(1) curve() def line(): forward(130) left(90) forward(10) left(90) forward(90) right(90) forward(30) line() curve() forward(80) for i in range(90): forward(0.5) right(1) forward(120) for i in range(90): forward(0.5) left(1) forward(72.7) right(90) right(1) forward(19) end_fill() penup() goto(160,186) right(180) pendown() #yellow part pencolor('ffde57') fillcolor('ffde57') begin_fill() forward(10) curve() line() curve() forward(80) for i in range(90): forward(0.5) right(1) forward(120) for i in range(90): forward(0.5) left(1) forward(72.7) right(90) right(1) forward(19) end_fill() penup() goto(-20,210) pendown() #circledots pencolor('white') fillcolor('white') begin_fill() circle(10) end_fill() pencolor('blue') penup() goto(110,-30) pendown() pencolor('white') fillcolor('white') begin_fill() circle(10) end_fill() hideturtle() done()
Taking advantage of turtle's methods, we can come up with an approximation of the Python logo with less code: from turtle import Screen, Turtle def curved_box(t, sides): for _ in range(sides): t.circle(90, extent=90) t.forward(120) t.circle(90, extent=90) def snake(t, color): t.backward(16) t.left(90) t.forward(16) t.right(90) t.fillcolor(color) t.begin_fill() t.forward(64) curved_box(t, 2) t.forward(44) t.left(90) t.forward(152) t.right(90) t.forward(16) t.right(90) t.forward(204) curved_box(t, 1) t.forward(44) t.left(90) t.forward(60) t.circle(-90, extent=90) t.forward(64) t.end_fill() t.backward(86) t.left(90) t.forward(224) t.dot(48, 'white') t.backward(224) t.right(90) t.forward(86) screen = Screen() turtle = Turtle() turtle.hideturtle() turtle.speed('fastest') turtle.penup() snake(turtle, '#3774A8') turtle.left(180) snake(turtle, '#F6D646') screen.exitonclick() But it's still only an approximation:
4
3
72,118,600
2022-5-4
https://stackoverflow.com/questions/72118600/subtract-first-and-last-element-wrap-around-in-numpy-diff
I have a large (100000, 6) array and would like to find the difference between each element in the vector. np.diff is almost exactly what I need, but I also want it to wrap around so that it also finds the difference between the first and last elements. Toy model: array=np.array([[0,2,4],[0,3,6]]) np.diff(array,axis=1) gives [[2,2],[3,3]] I would like to have [[2,2,-4],[3,3,-6]] or [[-4,2,2],[-6,3,3]] Is there a built-in way in numpy to do this?
You can use numpy.roll: np.roll(array, -1, axis=1)-array Output: array([[ 2, 2, -4], [ 3, 3, -6]])
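If you are on a reasonably recent NumPy (1.16+), np.diff itself can also produce the wrap-around column via its append argument; a small sketch using the toy array from the question:

import numpy as np

array = np.array([[0, 2, 4], [0, 3, 6]])

# Append the first column at the end so the last difference wraps around.
np.diff(array, axis=1, append=array[:, :1])
# -> [[ 2  2 -4]
#     [ 3  3 -6]]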
4
3
72,118,249
2022-5-4
https://stackoverflow.com/questions/72118249/why-are-the-branchless-and-built-in-functions-slower-in-python
I found 2 branchless functions that find the maximum of two numbers in python, and compared them to an if statement and the built-in max function. I thought the branchless or the built-in functions would be the fastest, but the fastest was the if-statement function by a large margin. Does anybody know why this is? Here are the functions: If-statement (2.16 seconds for 25000 operations): def max1(a, b): if a > b: return a return b Built-in (4.69 seconds for 25000 operations): def max2(a, b): return max(a, b) Branchless 1 (4.12 seconds for 25000 operations): def max3(a, b): return (a > b) * a + (a <= b) * b Branchless 2 (5.34 seconds for 25000 operations): def max4(a, b): diff = a - b return a - (diff & diff >> 31)
Your expectations about branching vs. branchless code apply to low-level languages like assembly and C. Branchless code can be faster in low-level languages because it prevents slowdowns caused by branch prediction misses. (Note: this means branchless code can be faster, but it will not necessarily be.) Python is a high-level language. Assuming you are using the CPython interpreter: for every bytecode instruction you execute, the interpreter has to branch on the kind of opcode, and typically many other things. For example, even the simple < operator requires a branch to check for the < opcode, another branch to check whether the object's class implements a __lt__ method, more branches to check whether the right-hand-side value is of a valid type for the comparison to be performed, and probably several other branches. Even your so-called "branchless" code will in practice result in a lot of branching for these reasons. Because Python is so high-level, each bytecode instruction is actually doing quite a lot of work compared to a single machine-code instruction. So the performance of simple code like this will mainly depend on how many bytecode instructions have to be interpreted: Your max1 function has to do three loads of local variables, a comparison, a conditional jump and a return. That's six. Your max2 function does two loads of local variables, one load of a global variable (referencing the built-in max), and also makes a function call; that requires passing arguments, and is relatively expensive compared to other bytecode instructions. On top of that, the built-in function itself has to do the same work as your own max1 function, so no wonder max2 is slower. Your max3 function does six loads of local variables, two comparisons, two multiplications, one addition, and one return. That's twelve instructions, so we should expect it to take about twice as long as max1. Likewise max4 does five loads of local variables, one store to a local variable, one load of a constant, two subtractions, one bitshift, one bitwise "and", and one return. That's twelve instructions again. That said, note that if we compare your max1 with the built-in function max directly, instead of your max2 which has an extra function call, your max1 function is still a bit faster than the built-in max. This is probably because the built-in max accepts a variable number of arguments, which may involve building a tuple of arguments, and the built-in max function also has a branch to check if it was called with a single iterable argument (e.g. max([3, 1, 4, 2])), and handle that case differently; your max1 function doesn't do those things.
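A quick way to see the instruction counts described above is the standard-library dis module; a rough sketch (the exact opcodes vary between CPython versions):

import dis

def max1(a, b):
    if a > b:
        return a
    return b

def max3(a, b):
    return (a > b) * a + (a <= b) * b

dis.dis(max1)  # a handful of bytecode instructions
dis.dis(max3)  # roughly twice as many loads, compares and arithmetic ops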
11
15
72,115,626
2022-5-4
https://stackoverflow.com/questions/72115626/why-does-the-stripe-signature-header-never-match-the-signature-of-request-body
I'm using Python with the Django Rest framework and am trying to receive webhook events correctly from stripe. However I constantly get this error: stripe.error.SignatureVerificationError: No signatures found matching the expected signature for payload This is the code: WEBHOOK_SECRET = settings.STRIPE_WEBHOOK_SK @csrf_exempt def webhook(request): sig_header = request.headers.get('Stripe-Signature', None) payload = request.body try: event = stripe.Webhook.construct_event( payload=payload, sig_header=sig_header, secret=WEBHOOK_SECRET ) except ValueError as e: raise e except stripe.error.SignatureVerificationError as e: raise e return HttpResponse(status=200) I have also tried modifying the request body format like so: payload = request.body.decode('utf-8') # and also payload = json.loads(request.body) And yet no luck. The error is coming from the verify_header() class method inside the WebhookSignature class. This is the part of the method where it fails: if not any(util.secure_compare(expected_sig, s) for s in signatures): raise error.SignatureVerificationError( "No signatures found matching the expected signature for payload", header, payload, ) So I printed out exptected_sig and signatures before this line and found that regardless of what format request.body is in, signatures is always there (which is good), but they never match the signature from the header. Why is this?
When Stripe calculates the signature for the Event it sends you, it uses a specific "payload" representing the entire Event's content. The signature is done on that exact payload and any change to it such as adding a new line, removing a space or changing the order of the properties will change the payload and the corresponding signature. When you verify the signature, you need to make sure that you pass the exact raw payload that Stripe sent you, otherwise the signature you calculate won't match the Stripe one. Frameworks can sometimes try to be helpful when receiving a request and they detect JSON and automatically parse it for you. This means that you think you are getting the "raw payload/body" but really you get an alternate version. It has the same content but it doesn't match what Stripe sent you. This is fairly common with Express in Node.js for example. So, as the developer, you have to explicitly request the exact raw/original payload Stripe sent you. And how to do this can differ based on a variety of factors. There are 2 issues on the stripe-node github with numerous potential fixes here and here. With Django, the same can happen and you need to make sure that your code requests the raw payload. You seem to use request.body as expected but that's one thing you want to dig into further. Additionally, another common mistake is using the wrong Webhook secret. If you use the Stripe CLI for example, it creates a new secret for you that is different from the one you see in the Dashboard for this Webhook Endpoint. You need to make sure you use the correct secret based on the environment you're in.
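As a debugging aid, you can recompute the expected signature yourself from the raw bytes you received, following Stripe's documented scheme (HMAC-SHA256 over "<timestamp>.<raw payload>" with the endpoint's signing secret), and compare it with the v1 value in the Stripe-Signature header. A rough sketch, assuming payload is the untouched request.body bytes and timestamp is the t= value from that header:

import hashlib
import hmac

def expected_v1_signature(payload: bytes, timestamp: str, secret: str) -> str:
    # Stripe signs "<timestamp>.<raw payload>" with HMAC-SHA256 using the
    # webhook endpoint's signing secret; the hex digest should match the
    # v1=... part of the Stripe-Signature header.
    signed = timestamp.encode() + b"." + payload
    return hmac.new(secret.encode(), signed, hashlib.sha256).hexdigest()

If this never matches, either the body has been altered before you read it, or the secret belongs to a different endpoint (for example the Stripe CLI secret versus the Dashboard one).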
4
7
72,113,469
2022-5-4
https://stackoverflow.com/questions/72113469/why-are-python-project-files-copied-after-installing-requirements-in-dockerfile
Take the sample from https://docs.docker.com/language/python/build-images/ for instance: # ... COPY requirements.txt requirements.txt RUN pip3 install -r requirements.txt COPY . . # ... vs # ... COPY . . RUN pip3 install -r requirements.txt # ... What are the disadvantages of the latter?
Docker checks every ADD and COPY statement to see if any of the copied files have changed, and invalidates the cache for that step and every later step if they have. So, in the latter Dockerfile, any change to your code invalidates the COPY . . layer and therefore the pip3 install step after it, meaning all requirements will be reinstalled on every code change.
4
1
72,112,131
2022-5-4
https://stackoverflow.com/questions/72112131/change-figure-size-in-matplotlib
This is my code and I want to make the plot bigger so it's easier to read. Here I'm trying to get feature importance based on mean decrease in impurity. I'm getting some output but since my barplot has 63 bins, I want it much bigger. I tried everything that is commented. Can someone please suggest how can I make this bar plot more readable? import pandas as pd from matplotlib.pyplot import figure # fig.set_figheight(20) # fig.set_figwidth(20) #plt_1 = plt.figure(figsize=(20, 15), dpi = 100) forest_importances = pd.Series(importances, index=feature_names) fig, ax = plt.subplots() forest_importances.plot.bar(yerr=std, ax=ax) ax.set_title("Feature importances using MDI") ax.set_ylabel("Mean decrease in impurity") # fig = plt.figure() fig.tight_layout() plt.show()
Use fig.set_size_inches before plt.show(): width = 8 height = 6 fig.set_size_inches(width, height)
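If you prefer to set the size when the figure is created, plt.subplots accepts a figsize argument; a sketch reusing forest_importances and std from the question (20x8 inches is just an assumed size):

import matplotlib.pyplot as plt

# Create the figure at the desired size (in inches) up front.
fig, ax = plt.subplots(figsize=(20, 8))
forest_importances.plot.bar(yerr=std, ax=ax)
ax.set_title("Feature importances using MDI")
ax.set_ylabel("Mean decrease in impurity")
fig.tight_layout()
plt.show()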
4
7
72,107,669
2022-5-4
https://stackoverflow.com/questions/72107669/how-to-get-all-possible-parentheses-combinations-for-an-expression-with-python
Given a list of several elements, find all the possible parentheses combinations. For example with [1, 2, 3, 4], it would return [ [1,2,3,4], [[1,2],3,4], [[1,2],[3,4]], [1,[2,3],4], [1,2,[3,4]], [[1,2,3],4], [[[1,2],3],4], [[1,[2,3]],4], [1,[2,3,4]], [1,[[2,3],4]], [1,[2,[3,4]]] ] in no particular order. PLEASE READ: Before you mark this as a duplicate of How to print all possible balanced parentheses for an expression?, although similar, it is a slightly different question than that. That question only asks for parentheses expressions where every value is surrounded. This question, however, asks for every single combination regardless of whether every element is within parentheses.
To list all the possible trees from the list: Iterate on all the possible number of children from the root; For a chosen number of children, iterate on all the possible ways to split the list into that number of sublists; Recursively find all the possible subtrees of the sublists; Combine all the possible subtrees of the children using itertools.product. from itertools import product, combinations, pairwise, chain def all_trees(seq): if len(seq) <= 1: yield from seq else: for n_children in range(2, len(seq)+1): for breakpoints in combinations(range(1, len(seq)), n_children-1): children = [seq[i:j] for i,j in pairwise(chain((0,), breakpoints, (len(seq)+1,)))] yield from product(*(all_trees(child) for child in children)) Testing: for seq in ([], [1], [1,2], [1,2,3], [1,2,3,4]): print(seq) print(list(all_trees(seq))) print() [] [] [1] [1] [1, 2] [(1, 2)] [1, 2, 3] [(1, (2, 3)), ((1, 2), 3), (1, 2, 3)] [1, 2, 3, 4] [(1, (2, (3, 4))), (1, ((2, 3), 4)), (1, (2, 3, 4)), ((1, 2), (3, 4)), ((1, (2, 3)), 4), (((1, 2), 3), 4), ((1, 2, 3), 4), (1, 2, (3, 4)), (1, (2, 3), 4), ((1, 2), 3, 4), (1, 2, 3, 4)]
5
1
72,111,280
2022-5-4
https://stackoverflow.com/questions/72111280/valueerror-tokenizer-class-mariantokenizer-does-not-exist-or-is-not-currently-i
Get this error when trying to run a MarianMT-based nmt model. Traceback (most recent call last): File "/home/om/Desktop/Project/nmt-marionmt-api/inference.py", line 45, in <module> print(batch_inference(model_path="en-ar-model/Mark2", text=text)) File "/home/om/Desktop/Project/nmt-marionmt-api/inference.py", line 15, in batch_inference tokenizer = AutoTokenizer.from_pretrained(model_path, local_file_only=True) File "/home/om/.virtualenvs/marianmt-api/lib/python3.8/site-packages/transformers/models/auto/tokenization_auto.py", line 525, in from_pretrained raise ValueError( ValueError: Tokenizer class MarianTokenizer does not exist or is not currently imported.
Installing SentencePiece worked for me. pip install sentencepiece
4
1
72,052,908
2022-4-29
https://stackoverflow.com/questions/72052908/how-to-return-and-download-excel-file-using-fastapi
How do I return an excel file (version: Office365) using FastAPI? The documentation seems pretty straightforward. But, I don't know what media_type to use. Here's my code: import os from fastapi import FastAPI from fastapi.responses import FileResponse from pydantic import BaseModel from typing import Optional excel_file_path = r"C:\Users\some_path\the_excel_file.xlsx" app = FastAPI() class ExcelRequestInfo(BaseModel): client_id: str @app.post("/post_for_excel_file/") async def serve_excel(item: ExcelRequestInfo): # (Generate excel using item.) # For now, return a fixed excel. return FileResponse( path=excel_file_path, # Swagger UI says 'cannot render, look at console', but console shows nothing. media_type='application/vnd.openxmlformats-officedocument.spreadsheetml.sheet' # Swagger renders funny chars with this argument: # 'application/vnd.ms-excel' ) Assuming I get it right, how to download the file? Can I use Swagger UI generated by FastAPI to view the sheet? Or, curl? Ideally, I'd like to be able to download and view the file in Excel. Solution Here's my final (edited) solution to save you from clicking about. In the course of development, I had to switch from a FileResponse to Response that returns io.BytesIO. import io import os.path from fastapi.responses import Response @router.get("/customer/{customer}/sheet") async def generate_excel(customer: str): excel_file_path: str = None buffer: io.BytesIO = None # Generate the sheet. excel_file_path, buffer = make_excel(customer=customer) # Return excel back to client. headers = { # By adding this, browsers can download this file. 'Content-Disposition': f'attachment; filename={os.path.basename(excel_file_path)}', # Needed by our client readers, for CORS (cross origin resource sharing). "Access-Control-Allow-Origin": "*", "Access-Control-Allow-Headers": "*", "Access-Control_Allow-Methods": "POST, GET, OPTIONS", } media_type = 'application/vnd.openxmlformats-officedocument.spreadsheetml.sheet' return Response( content=buffer.getvalue(), headers=headers, media_type=media_type )
You could set the Content-Disposition header using the attachment parameter, indicating to the web browser that the file should be downloaded, as described in the answers here and here. Swagger UI will provide a Download file link for you to download the file, as soon as you execute the request. headers = {'Content-Disposition': 'attachment; filename="Book.xlsx"'} return FileResponse(excel_file_path, headers=headers) To have the file viewed in the web browser, one can use the inline, instead of attachment, parameter in the Content-Disposition header, as explained in the linked answers earlier. However, for the browser to be able to display the Excel file, one should set the correct media_type in the FileResponse (for Excel files see here), as well as .xlsx (or .xls) must be a known file extension to the browser (this is usually achieved through web browser extensions/plug-ins).
6
11
72,064,986
2022-4-30
https://stackoverflow.com/questions/72064986/mathematical-explanation-of-leetcode-question-container-with-most-water
I was working on a medium level leetcode question 11. Container With Most Water. Besides the brute force solution with O(n^2), there is an optimal solution with complexity of O(n) by using two pointers from left and right side of the container. I am a little bit confused why this "two pointers" method must include the optimal solution. Does anyone know how to prove the correctness of this algorithm mathematically? This is an algorithm that I don't know of. Thank you! The original question is: You are given an integer array height of length n. There are n vertical lines drawn such that the two endpoints of the ith line are (i, 0) and (i, height[i]). Find two lines that together with the x-axis form a container, such that the container contains the most water. Return the maximum amount of water a container can store. Notice that you may not slant the container. A brutal solution for this question is(O(n^2)): def maxArea(self, height: List[int]) -> int: length = len(height) volumn = 0 #calculate all possible combinations, and compare one by one: for position1 in range(0,length): for position2 in range (position1 + 1, length): if min(height[position1],height[position2])*(position2 - position1) >=volumn: volumn = min(height[position1],height[position2])*(position2 - position1) else: volumn = volumn return volumn Optimal solution approach, The code I wrote is like this(O(n)): def maxArea(self, height: List[int]) -> int: pointerOne, pointerTwo = 0, len(height)-1 maxVolumn = 0 #Move left or right pointer one step for whichever is smaller while pointerOne != pointerTwo: if height[pointerOne] <= height[pointerTwo]: maxVolumn = max(height[pointerOne]*(pointerTwo - pointerOne), maxVolumn) pointerOne += 1 else: maxVolumn = max(height[pointerTwo]*(pointerTwo - pointerOne), maxVolumn) pointerTwo -= 1 return maxVolumn Does anyone know why this "two pointers" method can find the optimal solution? Thanks!
Based on my understanding the idea is roughly: Starting from the farthest-apart bars (i.e. first and last bar) and then narrowing width to find potentially better pair(s). Steps: We need to have the ability to loop over all 'potential' candidates (the candidates better than what we have on hand rather than all candidates as you did in brutal solution) thus starting from outside bars and no inner pairs will be missed. If an inner bar pair does exist, it means the height is higher than bars we have on hand, so you should not just #Move left or right pointer one step but #Move left or right pointer to next taller bar . Why #Move left or right pointer whichever is smaller? Because the smaller bar doesn't fulfill the 'potential' of the taller bar. The core idea behind the steps is: starting from somewhere that captures optimal solution inside (step 1), then by each step you are reaching to a better solution than what you have on hand (step 2 and 3), and finally you will reach to the optimal solution. One question left for you to think about: what makes sure the optimal solution is not missed when you are executing the steps above? :)
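This is not a proof, but a quick empirical check against the brute force can build confidence that the two-pointer scan never skips the optimal pair; a small sketch:

import random

def brute_force(heights):
    n = len(heights)
    return max(min(heights[i], heights[j]) * (j - i)
               for i in range(n) for j in range(i + 1, n))

def two_pointer(heights):
    i, j = 0, len(heights) - 1
    best = 0
    while i < j:
        best = max(best, min(heights[i], heights[j]) * (j - i))
        if heights[i] <= heights[j]:
            i += 1   # move the limiting (shorter) side
        else:
            j -= 1
    return best

for _ in range(1000):
    heights = [random.randint(0, 20) for _ in range(random.randint(2, 12))]
    assert brute_force(heights) == two_pointer(heights), heights
print("two-pointer matched brute force on every random case")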
5
2
72,103,359
2022-5-3
https://stackoverflow.com/questions/72103359/format-a-jupyter-notebook-on-save-in-vscode
I use black to automatically format all of my Python code whenever I save in VSCode. I'd like the same functionality, but within a Jupyter notebook in VSCode. This answer shows how to right click and format a cell or a whole notebook from the right click context menu, or a keyboard shortcut. Can I make this happen on save instead? It looks like there is an issue related to this, but it is over a year old. Are there any good workarounds? Maybe a way to set the format notebook option to the same keybinding as save? UPDATE: If you like me want this functionality to be added please go to the issue and upvote it, the devs said they will need a bunch of upvotes before it's considered. UPDATE: This got enough attention that it has been added!
Good news! This is now an option in the newest VSCode release (1.77) Setting "notebook.formatOnSave.enabled": true will do the trick. You can read more about it here. If you have black already enabled for Python it should work fine.
15
20
72,093,397
2022-5-2
https://stackoverflow.com/questions/72093397/how-do-you-input-and-output-text-with-pyscript
I’m learning py-script where you can use <py-script></py-script> in an HTML5 file to write Python Code. As a python coder, I would like to try web development while still using python, so it would be helpful if we could output and input information using py-script. For example, could someone explain how to get this function to work: <html> <head> <link rel="stylesheet" href="https://pyscript.net/alpha/pyscript.css" /> <script defer src="https://pyscript.net/alpha/pyscript.js"></script> </head> <body> <div>Type an sample input here</div> <input id = “test_input”></input> <-- How would you get this button to display the text you typed into the input into the div with the id, “test”--!> <button id = “submit-button” onClick = “py-script-function”> <div id = “test”></div> <div <py-script> <py-script> </body> </html I would appreciate it and I hope this will also help the other py-script users.
I checked source code on GitHub and found folder examples. Using files todo.html and todo.py I created this index.html (which I tested using local server python -m http.server) Some elements I figured out because I have some experience with JavaScript and CSS - so it could be good to learn JavaScript and CSS to work with HTML elements. <!DOCTYPE html> <html> <head> <!--<link rel="stylesheet" href="https://pyscript.net/alpha/pyscript.css" />--> <script defer src="https://pyscript.net/alpha/pyscript.js"></script> </head> <body> <div>Type an sample input here</div> <input type="text" id="test-input"/> <button id="submit-button" type="submit" pys-onClick="my_function">OK</button> <div id="test-output"></div> <py-script> from js import console def my_function(*args, **kwargs): #print('args:', args) #print('kwargs:', kwargs) console.log(f'args: {args}') console.log(f'kwargs: {kwargs}') text = Element('test-input').element.value #print('text:', text) console.log(f'text: {text}') Element('test-output').element.innerText = text </py-script> </body> </html> Here screenshot with JavaScript console in DevTool in Firefox. It needed longer time to load all modules (from Create pyodine runtime to Collecting nodes...) Next you can see outputs from console.log(). You may also use print() but it shows text with extra error writing to undefined ....
6
10
72,068,789
2022-4-30
https://stackoverflow.com/questions/72068789/keeping-both-dataframe-indexes-on-merge
I'm sure this question must have already been answered somewhere but I couldn't find an answer that suits my case. I have 2 pandas DataFrames a = pd.DataFrame({'A1':[1,2,3], 'A2':[2,4,6]}, index=['a','b','c']) b = pd.DataFrame({'A1':[3,5,6], 'A2':[3,6,9]}, index=['a','c','d']) I want to merge them in order to obtain something like result = pd.DataFrame({ 'A1' : [3,2,5,6], 'A2' : [3,4,6,9] }, index=['a','b','c','d']) Basically, I want a new df with the union of both indexes. Where indexes match, the value in each column should be updated with the one from the second df (in this case b). Where there is no match the value is taken from the starting df (in this case a). I tried with merge(), join() and concat() but I could not manage to obtain this result.
You could use pd.concat to create one dataframe (b being the first one as it is b that has a priority for it's values to be kept over a), and then drop the duplicated index: Using your sample data: c = pd.concat([b,a]) c[~c.index.duplicated()].sort_index() prints: A1 A2 a 3 3 b 2 4 c 5 6 d 6 9
4
3
72,036,397
2022-4-27
https://stackoverflow.com/questions/72036397/boto3-keyconditionexpression-on-both-partition-and-sort-key
So i have the following schema: domain (partition key) time_stamp (sort key) I am (attempting) to use boto to query dynamo. I want to return all records with a given domain, after a given time_stamp. I have tried a couple of different approaches to no avail: First is my ideal approach. I believe it to be much cleaner it also uses between which i would like to incorporate if possible. resp = table.query( KeyConditionExpression=Key('domain').eq(domain) and Key('time_stamp').between(start,end), ProjectionExpression= 'time_stamp,event_type,country,tablet,mobile,desktop') ERROR: An error occurred (ValidationException) when calling the Query operation: Query condition missed key schema element: domain Here is the second approach that I took. It appeared more often in docs and other stack questions that I found. Yet, I am still getting an error. resp = table.query( ExpressionAttributeNames={ '#nd': 'domain', '#nts': 'time_stamp' }, ExpressionAttributeValues={ ':vd': domain, ':vts': start, }, KeyConditionExpression='(#nd = :vd) AND (#nts > :vts)', ProjectionExpression= 'time_stamp,event_type,country,tablet,mobile,desktop') ERROR: An error occurred (ValidationException) when calling the Query operation: One or more parameter values were invalid: Condition parameter type does not match schema type I believe this error may be related to the data type of time_stamp being a string and not the Decimal() format that dynamo uses to represent numbers.
You have to use & not and. resp = table.query( KeyConditionExpression=Key('domain').eq(domain) & Key('time_stamp').between(start,end), ProjectionExpression= 'time_stamp,event_type,country,tablet,mobile,desktop') This is where the AWS docs say this, but it's kind of buried in there.
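The reason and fails silently is ordinary Python semantics: a and b evaluates to b when a is truthy, so only the sort-key condition ever reaches DynamoDB. A tiny illustration:

from boto3.dynamodb.conditions import Key

cond_a = Key('domain').eq('example.com')
cond_b = Key('time_stamp').between('2022-01-01', '2022-12-31')

# `and` returns the second operand because cond_a is truthy, so the
# partition-key condition is dropped before boto3 ever sees it.
broken = cond_a and cond_b
# The overloaded & operator builds the combined KeyConditionExpression
# that query() expects.
combined = cond_a & cond_b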
4
4
72,083,071
2022-5-2
https://stackoverflow.com/questions/72083071/mypy-doesnt-recognize-sqlalchemy-columns-with-hybrid-property
I'm trying to use mypy with SQLAlchemy. In order to validate/modify specific column value (email in this case), SQLAlchemy official document provides hybrid_property decorator. The problem is, mypy doesn't recognize EmailAddress class constructor properly, it gives: email_address.py:31: error: Unexpected keyword argument "email" for "EmailAddress"; did you mean "_email"? How can I tell mypy to recognize these columns? from typing import TYPE_CHECKING from sqlalchemy import Column, Integer, String from sqlalchemy.ext.declarative import declarative_base # I don't even like the following statements just for setter if TYPE_CHECKING: hybrid_property = property else: from sqlalchemy.ext.hybrid import hybrid_property Base = declarative_base() class EmailAddress(Base): __tablename__ = "email_address" id = Column(Integer, primary_key=True) _email = Column("email", String) @hybrid_property def email(self): return self._email @email.setter def email(self, email): self._email = email EmailAddress(email="[email protected]") # email_address.py:31: error: Unexpected keyword argument "email" for "EmailAddress"; did you mean "_email"? I'm using following packages: SQLAlchemy==1.4.35 mypy==0.942 mypy-extensions==0.4.3 sqlalchemy2-stubs==0.0.2a22
OK, it seems like I finally found a way to solve the problem. This reminds me of uncooperative behaviors between dataclass/property decorators discussed in here. I end up with splitting EmailAddress class into 2: Use @dataclass decorator on base class in order to indicate constructor options. Override email property so that mypy doesn't complain redef. from dataclasses import dataclass from typing import TYPE_CHECKING, Optional from sqlalchemy import Column, Integer, String, Table from sqlalchemy.orm import registry mapper_registry: registry = registry() # I don't even like the following statements just for setter if TYPE_CHECKING: hybrid_property = property else: from sqlalchemy.ext.hybrid import hybrid_property @dataclass @mapper_registry.mapped class EmailAddressBase: __tablename__ = "email address" id: int = Column(Integer, primary_key=True) email: Optional[str] = None class EmailAddress(EmailAddressBase): _email = Column("email", String) @hybrid_property def email(self): return self._email @email.setter def email(self, email): self._email = email email = EmailAddress(email="[email protected]") print(email.email)
6
0
72,029,857
2022-4-27
https://stackoverflow.com/questions/72029857/no-module-named-tensorflow-compat
I'm trying to use the code from the Teachable Machine website: from keras.models import load_model from PIL import Image, ImageOps import numpy as np # Load the model model = load_model('keras_model.h5') # Create the array of the right shape to feed into the keras model # The 'length' or number of images you can put into the array is # determined by the first position in the shape tuple, in this case 1. data = np.ndarray(shape=(1, 224, 224, 3), dtype=np.float32) # Replace this with the path to your image image = Image.open('<IMAGE_PATH>') #resize the image to a 224x224 with the same strategy as in TM2: #resizing the image to be at least 224x224 and then cropping from the center size = (224, 224) image = ImageOps.fit(image, size, Image.ANTIALIAS) #turn the image into a numpy array image_array = np.asarray(image) # Normalize the image normalized_image_array = (image_array.astype(np.float32) / 127.0) - 1 # Load the image into the array data[0] = normalized_image_array # run the inference prediction = model.predict(data) print(prediction) but when running the code, I get the following error: ModuleNotFoundError: No module named 'tensorflow.compat' I tried running the code on two separate machines, uninstalling and re-installing tensorflow, pip, keras, nothing seemed to help. I'm using Python 3.9 and tensorflow 2.8.0
This just happened to me, but I figured it out. Your .py script's filename is the same as one of the files in the tensorflow library, so Python imports your script instead of the library module. You can just rename your Python script and it will work fine.
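One way to confirm that a local file is shadowing the real package is to check where Python actually resolves the import from; a small sketch (run it from the same directory as the failing script):

import tensorflow
# If this prints a path inside your project rather than site-packages,
# a local tensorflow.py (or similarly named file) is shadowing the
# installed library and should be renamed.
print(tensorflow.__file__)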
10
7
72,102,435
2022-5-3
https://stackoverflow.com/questions/72102435/how-to-install-python-3-6-on-ubuntu-22-04
I need to install this specific Python version, to prepare a developer environment, because I'm maintaining a system with multiple libraries based on python 3.6.9. I recently installed Ubuntu 22.04 on my laptop, but I had no success trying to install this python version. I tried to install with apt-get after adding the deadsnake repository, but this python version is not available. I tried installing from source by compiling, but it did not work. Running sudo make altinstall exited with this error: Segmentation fault (core dumped) make: *** [Makefile:1112: altinstall] Erro 139
I have faced the same problems and could make it work by adding some additional flags when running ./configure Here are my steps: Step 1 – Prerequsities sudo apt-get install -y make build-essential libssl-dev zlib1g-dev \ libbz2-dev libreadline-dev libsqlite3-dev wget curl llvm libncurses5-dev \ libncursesw5-dev xz-utils tk-dev libffi-dev liblzma-dev \ libgdbm-dev libnss3-dev libedit-dev libc6-dev Step 2 – Download Python 3.6 wget https://www.python.org/ftp/python/3.6.15/Python-3.6.15.tgz tar -xzf Python-3.6.15.tgz Step 3 – Compile Python Source cd Python-3.6.15 ./configure --enable-optimizations -with-lto --with-pydebug make -j 8 # adjust for number of your CPU cores sudo make altinstall Step 4 – Check the Python Version python3.6 -V
41
96
72,087,819
2022-5-2
https://stackoverflow.com/questions/72087819/pydantic-set-attribute-field-to-model-dynamically
According to the docs: allow_mutation whether or not models are faux-immutable, i.e. whether setattr is allowed (default: True) Well I have a class : class MyModel(BaseModel): field1:int class Config: allow_mutation = True If I try to add a field dynamically : model1 = MyModel(field1=1) model1.field2 = 2 And I get this error : File "pydantic/main.py", line 347, in pydantic.main.BaseModel.__setattr__ ValueError: "MyModel" object has no field "field2" Obviously, using setattr method will lead to the same error. setattr(model1, 'field2', 2) Output: File "pydantic/main.py", line 347, in pydantic.main.BaseModel.__setattr__ ValueError: "MyModel" object has no field "field2" What did I miss here ?
You can use the Config object within the class and set the extra attribute to "allow", or pass extra=Extra.allow as a keyword argument when declaring the model. Example from the docs (note it uses Extra.forbid, i.e. the opposite behaviour, to demonstrate the resulting error): from pydantic import BaseModel, ValidationError, Extra class Model(BaseModel, extra=Extra.forbid): a: str try: Model(a='spam', b='oh no') except ValidationError as e: print(e) """ 1 validation error for Model b extra fields not permitted (type=value_error.extra) """
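Applied to the model in the question, a minimal sketch of the allowing variant (pydantic v1 syntax, assuming that is the version in use):
from pydantic import BaseModel, Extra

class MyModel(BaseModel, extra=Extra.allow):
    field1: int

model1 = MyModel(field1=1)
model1.field2 = 2        # no longer raises: extra attributes are allowed
print(model1.field2)     # 2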
12
9
72,032,032
2022-4-27
https://stackoverflow.com/questions/72032032/importerror-cannot-import-name-iterable-from-collections-in-python
Working in Python with Atom on a Mac. Code: from rubik.cube import Cube from rubik_solver import utils Full error: Traceback (most recent call last): File "/Users/Audey/Desktop/solver.py", line 2, in <module> from rubik_solver import utils File "/Users/Audey/Library/Python/3.10/lib/python/site-packages/rubik_solver/utils.py", line 4, in <module> from past.builtins import basestring File "/Users/Audey/Library/Python/3.10/lib/python/site-packages/past/builtins/__init__.py", line 43, in <module> from past.builtins.noniterators import (filter, map, range, reduce, zip) File "/Users/Audey/Library/Python/3.10/lib/python/site-packages/past/builtins/noniterators.py", line 24, in <module> from past.types import basestring File "/Users/Audey/Library/Python/3.10/lib/python/site-packages/past/types/__init__.py", line 25, in <module> from .oldstr import oldstr File "/Users/Audey/Library/Python/3.10/lib/python/site-packages/past/types/oldstr.py", line 5, in <module> from collections import Iterable ImportError: cannot import name 'Iterable' from 'collections' (/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/collections/__init__.py) The from rubik_solver import utils is what is causing the error as when I remove it the error does not appear. I am not sure what is causing the error and hove checked there code and found it on other sources so am sure that it should work. Any solves?
The Iterable abstract class was removed from collections in Python 3.10. See the deprecation note in the 3.9 collections docs. In the section Removed of the 3.10 docs, the item Remove deprecated aliases to Collections Abstract Base Classes from the collections module. (Contributed by Victor Stinner in bpo-37324.) is what results in your error. You can use Iterable from collections.abc instead, or use Python 3.9 if the problem is in a dependency that can't be updated.
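For code you control, the fix is just the import path (for a dependency such as past/rubik_solver you need an updated release or an older Python, as noted above); a minimal illustration:
from collections.abc import Iterable   # works on Python 3.3+ including 3.10

print(isinstance([1, 2, 3], Iterable))  # True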
50
88
72,097,725
2022-5-3
https://stackoverflow.com/questions/72097725/converting-py-files-to-ipynb
My organisation converts any Jupyter Notebooks (.ipynb files) it makes into python scripts (.py files) for easier management in our repos. I need to convert them back so I can run the notebooks but can't work out how. I believe they've been encoded using the nbconvert package but I couldn't find a way to convert the files back in the package docs. I've included the head of one of the .py files bellow in case it makes the encoding format more obvious. # --- # jupyter: # jupytext: # formats: ipynb,py:light # text_representation: # extension: .py # format_name: light # format_version: '1.4' # jupytext_version: 1.2.0 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Title/filename here # ### Initialise import pandas as pd import numpy as np # etc # ### Title here # + do_stuff()
Looks like the file was actually converted using the program jupytext. To convert it back, I found that running jupytext --to ipynb my_file.py worked. More simply: opening the .py file in Jupyter Notebook allowed me to view the .py file as if it were a .ipynb file, without converting.
4
4
72,097,417
2022-5-3
https://stackoverflow.com/questions/72097417/segfault-using-htop-on-aws-sagemaker-pytorch-1-10-cpu-py38-app
I am trying to launch the htop command in the Pytorch 1.10 - Python 3.8 CPU optimized AWS Sagemaker container. This works fine in other images I have used till now, but in this one, the command fails with a segfault: htop htop: /opt/conda/lib/libncursesw.so.6: no version information available (required by htop) htop: /opt/conda/lib/libncursesw.so.6: no version information available (required by htop) htop: /opt/conda/lib/libncursesw.so.6: no version information available (required by htop) Segmentation fault (core dumped) More info : htop --version htop: /opt/conda/lib/libncursesw.so.6: no version information available (required by htop) htop: /opt/conda/lib/libncursesw.so.6: no version information available (required by htop) htop: /opt/conda/lib/libncursesw.so.6: no version information available (required by htop) htop 2.2.0 - (C) 2004-2019 Hisham Muhammad Released under the GNU GPL.
I fixed this with # Note: add sudo if needed: ln -fs /lib/x86_64-linux-gnu/libncursesw.so.6 /opt/conda/lib/libncursesw.so.6
4
3
72,073,919
2022-5-1
https://stackoverflow.com/questions/72073919/graph-tool-stochastic-block-model-vs-leiden
I'm calculating network communities for 4 networks using 2 methods: 'Leiden' method, which gives me 7 (a), 13 (b), 19 (c), 22 (d) communities. 'Stochastic block Model', also checking group membership of the nodes by inspecting levels of the hierarchy, like so: state = gt.inference.minimize_nested_blockmodel_dl(g) state.print_summary() levels = state.get_levels() for s in levels: print(s) if s.get_N() == 1: break lstate = state.levels[0] b = lstate.get_blocks() print(b[10]) which prints: <BlockState object with 228 blocks (21 nonempty), degree-corrected, for graph <Graph object, undirected, with 228 vertices and 1370 edges, 1 internal vertex property, 1 internal edge property, at 0x7fbaff1c8d50>, at 0x7fba9fac1bd0> <BlockState object with 21 blocks (6 nonempty), for graph <Graph object, undirected, with 228 vertices and 96 edges, at 0x7fb9a3c51910>, at 0x7fb9a2dd1a10> <BlockState object with 6 blocks (1 nonempty), for graph <Graph object, undirected, with 21 vertices and 20 edges, at 0x7fb9a3c51590>, at 0x7fb9a3c51ed0> <BlockState object with 1 blocks (1 nonempty), for graph <Graph object, undirected, with 6 vertices and 1 edge, at 0x7fb9a6f034d0>, at 0x7fb9a3c51790> 190 <Graph object, undirected, with 3459 vertices and 134046 edges, 1 internal vertex property, 1 internal edge property, at 0x7fbb62e22790> l: 0, N: 3459, B: 294 l: 1, N: 294, B: 85 l: 2, N: 85, B: 34 l: 3, N: 34, B: 12 l: 4, N: 12, B: 4 l: 5, N: 4, B: 1 l: 6, N: 1, B: 1 and draws: This looks like having WAY more communities than using Leiden, and I'm trying to wrap my head around why, as well as this SBM concept. Are these SBM graphs depicting adicional levels of hierarchy or is there something else going on here that justifies so many more communities?
The method of modularity maximization (of which Leiden is an implementation) has two important properties: It only searches for assortative communities (i.e. groups with more internal connections than external ones). It is a statistically inconsistent method that will both overfit and underfit, depending on the situation. A discussion on this matter can be found here: https://skewed.de/tiago/blog/modularity-harmful The SBM inference method is different on both counts: It finds groups with arbitrary mixing patterns, i.e. preferences of connections to other groups. Assortativity is a special case, but there are many other possible patterns. It achieves this in a statistically principled manner, avoiding both overfitting and underfitting. For a theoretical introduction, see: https://arxiv.org/abs/1705.10225. For a discussion on the differences between inferential and non-inferential methods see: https://arxiv.org/abs/2112.00183 Because of the above, one should not expect SBM inference and Leiden/Louvain to yield similar answers in general. Now, for whatever reason, you may be interested in finding only assortative communities. You can also do that with the SBM, but using a more constrained parametrization. You can do this with graph-tool as explained here: https://graph-tool.skewed.de/static/doc/demos/inference/inference.html#assortative-community-structure
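For reference, the linked section boils down to something like the sketch below; the exact class and method names here are assumptions taken from that docs page for a recent graph-tool version, so double-check them against your installed release:
import graph_tool.all as gt
import numpy as np

g = gt.collection.data["football"]      # any graph; a bundled example network here
state = gt.PPBlockState(g)              # planted-partition state: assortative groups only
state.multiflip_mcmc_sweep(beta=np.inf, niter=1000)  # greedy optimization of the description length
print(state)                            # inspect the number of groups found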
4
3
72,044,314
2022-4-28
https://stackoverflow.com/questions/72044314/how-to-validate-data-received-via-the-telegrams-web-app
I'm trying to validate WebApp data but the result is not what I wanted. Telegram documentation: data_check_string = ... secret_key = HMAC_SHA256(<bot_token>, "WebAppData") if (hex(HMAC_SHA256(data_check_string, secret_key)) == hash) { // data is from Telegram } MyCode: BOT_TOKEN = '5139539316:AAGVhDje2A3mB9yA_7l8-TV8xikC7KcudNk' data_check_string = 'query_id=AAGcqlFKAAAAAJyqUUp6-Y62&user=%7B%22id%22%3A1246866076%2C%22first_name%22%3A%22Dante%22%2C%22last_name%22%3A%22%22%2C%22username%22%3A%22S_User%22%2C%22language_code%22%3A%22en%22%7D&auth_date=1651689536&hash=de7f6b26aadbd667a36d76d91969ecf6ffec70ffaa40b3e98d20555e2406bfbb' data_check_arr = data_check_string.split('&') needle = 'hash=' hash_item = '' telegram_hash = '' for item in data_check_arr: if item[0:len(needle)] == needle: telegram_hash = item[len(needle):] hash_item = item data_check_arr.remove(hash_item) data_check_arr.sort() data_check_string = "\n".join(data_check_arr) secret_key = hmac.new("WebAppData".encode(), BOT_TOKEN.encode(), hashlib.sha256).digest() calculated_hash = hmac.new(data_check_string.encode(), secret_key, hashlib.sha256).hexdigest() print(calculated_hash == telegram_hash) # print False I'm trying to validate webapp data in python, but my code didn't give the intended result. the hash which my code gives me is different from the telegram's one. UPDATE: valid data added, and bot-token has been changed.
You need to unquote data_check_string from urllib.parse import unquote data_check_string = unquote('query_id=AAGcqlFKAAAAAJyqUUp6-Y62&user=%7B%22id%22%3A1246866076%2C%22first_name%22%3A%22Dante%22%2C%22last_name%22%3A%22%22%2C%22username%22%3A%22S_User%22%2C%22language_code%22%3A%22en%22%7D&auth_date=1651689536&hash=de7f6b26aadbd667a36d76d91969ecf6ffec70ffaa40b3e98d20555e2406bfbb') And swap the arguments calculated_hash = hmac.new(secret_key, data_check_string.encode(), hashlib.sha256).hexdigest()
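Putting the question's parsing together with those two fixes, a compact sketch of the whole check (same logic, just wrapped in a function):
import hashlib
import hmac
from urllib.parse import unquote

def is_valid_webapp_data(init_data: str, bot_token: str) -> bool:
    pairs = unquote(init_data).split('&')
    telegram_hash = ''
    data_pairs = []
    for item in pairs:
        if item.startswith('hash='):
            telegram_hash = item[len('hash='):]
        else:
            data_pairs.append(item)
    data_pairs.sort()
    data_check_string = '\n'.join(data_pairs)
    secret_key = hmac.new("WebAppData".encode(), bot_token.encode(), hashlib.sha256).digest()
    calculated_hash = hmac.new(secret_key, data_check_string.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(calculated_hash, telegram_hash)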
4
1
72,101,578
2022-5-3
https://stackoverflow.com/questions/72101578/using-string-parameter-for-nvidia-triton
I'm trying to deploy a simple model on the Triton Inference Server. It is loaded well but I'm having trouble formatting the input to do a proper inference request. My model has a config.pbtxt set up like this max_batch_size: 1 input: [ { name: "examples" data_type: TYPE_STRING format: FORMAT_NONE dims: [ -1 ] is_shape_tensor: false allow_ragged_batch: false optional: false } ] I've tried using a pretty straightforward python code to setup the input data like this (the outputs are not written but are setup correctly) bytes_data = [input_data.encode('utf-8')] bytes_data = np.array(bytes_data, dtype=np.object_) bytes_data = bytes_data.reshape([-1, 1]) inputs = [ httpclient.InferInput('examples', bytes_data.shape, "BYTES"), ] inputs[0].set_data_from_numpy(bytes_data) But I keep getting the same error message tritonclient.utils.InferenceServerException: Could not parse example input, value: '[my text input here]' [[{{node ParseExample/ParseExampleV2}}]] I've tried multiple ways of encoding the input, as bytes or even as TFX serving used to ask like this { "instances": [{"b64": "CjEKLwoJdXR0ZXJhbmNlEiIKIAoecmVuZGV6LXZvdXMgYXZlYyB1biBjb25zZWlsbGVy"}]} I'm not exactly sure where the problems comes from if anyone knows?
If anyone gets this same problem, this solved it. I had to create a tf.train.Example() and set the data correctly example = tf.train.Example() example_bytes = str.encode(input_data) example.features.feature['utterance'].bytes_list.value.extend([example_bytes]) inputs = [ httpclient.InferInput('examples', [1], "BYTES"), ] inputs[0].set_data_from_numpy(np.asarray(example.SerializeToString()).reshape([1]), binary_data=False)
4
2
72,074,882
2022-5-1
https://stackoverflow.com/questions/72074882/call-r-object-from-python-with-r-in-a-quarto-document
I try to call an R object from Python inside a Quarto document: --- title: "pandas" format: html jupyter: python3 --- ```{r} data("penguins", package = "palmerpenguins") ``` ```{python} penguins=r.penguins penguins ``` When I execute the chunks one by one in RStudio, everything is okay: > data("penguins", package = "palmerpenguins") > reticulate::repl_python() # automatically executed by RStudio Python 3.10.4 (/Users/.../3.10.4/bin/python3.10) Reticulate 1.24 REPL -- A Python interpreter in R. Enter 'exit' or 'quit' to exit the REPL and return to R. >>> penguins=r.penguins >>> penguins species island bill_length_mm ... body_mass_g sex year 0 Adelie Torgersen 39.1 ... 3750 male 2007 1 Adelie Torgersen 39.5 ... 3800 female 2007 ... However, when I try to render this document, it errors this: --------------------------------------------------------------------------- NameError Traceback (most recent call last) Input In [1], in <cell line: 2>() 1 # Python chunk ----> 2 penguins=r.penguins 3 penguins NameError: name 'r' is not defined According to RMarkdown documentation, nothing else is required (so no e.g. rpy2). I try to add library(reticulate) or reticulate::repl_python() in the R chunk but it doesn't solve the issue. Note: I'm aware of an old unanswered similar question for RMarkdown. Thanks!
Quarto has two engines for rendering, knitr and jupyter. A related document is here. If we use: --- title: "pandas" format: html --- ```{r} data("penguins", package = "palmerpenguins") ``` ```{python} penguins=r.penguins penguins ``` the engine will be knitr. While rendering, knitr uses reticulate (the R interface to Python) to run the Python code chunks, and in this process knitr converts the r.penguins form into the corresponding reticulate call, so the document renders successfully. In other words, knitr makes some adaptations so that we can easily run Python code chunks with reticulate; if we don't use the knitr engine, we can't use the r.penguins form. Quarto uses R to run all code chunks (automatically using the R package reticulate for Python chunks) when it uses the knitr engine, and it uses Python to run all code chunks when it uses the jupyter (jupyter: python3) engine. In the latter case, to run R code we must use a module such as rpy2 inside a Python chunk (not an R chunk; code in R chunks will not be run). We can also have R run all code chunks by setting jupyter: ir (if IRkernel is installed), but then code in Python chunks will not be run, and we must use a package such as reticulate in an R chunk to run Python code. This is my understanding.
4
7
72,049,386
2022-4-28
https://stackoverflow.com/questions/72049386/sqlalchemy-multi-column-constraint
I have a number of tables in different schemas, I used the pattern from the docs. Some of my tables require multi column constraints and it was unclear what the dictionary key would be for declaring that unique constraint as they mention in the section above In my model below, I'd like to create a unique constraint with name, key, org. I currently have to do this in sql... class Parent(Base): __tablename__ = 'parent' __table_args__ = {'schema': 'example'} id = Column(Integer, primary_key=True) name = Column(String(512)) key = Column(String()) org = Column(String(36))
I think I encountered that issue a while back. If I remember correctly, it was just a matter of moving the "schema dict" inside a tuple which also contains your constraints. I can try to dig further if that does not work, but the documentation seems to agree: with declarative table configuration, __table_args__ can be a tuple containing positional arguments (like constraints) and, as a final argument, a dict of keyword arguments (like schema) for Table. from sqlalchemy import UniqueConstraint class Parent(Base): __tablename__ = 'parent' __table_args__ = ( UniqueConstraint('name', 'key', 'org'), {'schema': 'example'}, ) id = Column(Integer, primary_key=True) name = Column(String(512)) key = Column(String()) org = Column(String(36))
4
4
72,106,357
2022-5-3
https://stackoverflow.com/questions/72106357/access-objects-in-pyspark-user-defined-function-from-outer-scope-avoid-pickling
How do I avoid initializing a class within a pyspark user-defined function? Here is an example. Creating a spark session and DataFrame representing four latitudes and longitudes. import pandas as pd from pyspark import SparkConf from pyspark.sql import SparkSession conf = SparkConf() conf.set('spark.sql.execution.arrow.pyspark.enabled', 'true') spark = SparkSession.builder.config(conf=conf).getOrCreate() sdf = spark.createDataFrame(pd.DataFrame({ 'lat': [37, 42, 35, -22], 'lng': [-113, -107, 127, 34]})) Here is the Spark DataFrame +---+----+ |lat| lng| +---+----+ | 37|-113| | 42|-107| | 35| 127| |-22| 34| +---+----+ Enriching the DataFrame with a timezone string at each latitude / longitude via the timezonefinder package. Code below runs without errors from typing import Iterator from timezonefinder import TimezoneFinder def func(iterator: Iterator[pd.DataFrame]) -> Iterator[pd.DataFrame]: for dx in iterator: tzf = TimezoneFinder() dx['timezone'] = [tzf.timezone_at(lng=a, lat=b) for a, b in zip(dx['lng'], dx['lat'])] yield dx pdf = sdf.mapInPandas(func, schema='lat double, lng double, timezone string').toPandas() The above code runs without errors and creates the pandas DataFrame below. The issue is the TimezoneFinder class is initialized within the user-defined function which creates a bottleneck In [4]: pdf Out[4]: lat lng timezone 0 37.0 -113.0 America/Phoenix 1 42.0 -107.0 America/Denver 2 35.0 127.0 Asia/Seoul 3 -22.0 34.0 Africa/Maputo The question is how to get this code to run more like below, where the TimezoneFinder class is initialized once and outside of the user-defined function. As is, the code below generates this error PicklingError: Could not serialize object: TypeError: cannot pickle '_io.BufferedReader' object def func(iterator: Iterator[pd.DataFrame]) -> Iterator[pd.DataFrame]: for dx in iterator: dx['timezone'] = [tzf.timezone_at(lng=a, lat=b) for a, b in zip(dx['lng'], dx['lat'])] yield dx tzf = TimezoneFinder() pdf = sdf.mapInPandas(func, schema='lat double, lng double, timezone string').toPandas() UPDATE - Also tried to use functools.partial and an outer function but still received same error. That is, this approach does not work: def outer(iterator, tzf): def func(iterator: Iterator[pd.DataFrame]) -> Iterator[pd.DataFrame]: for dx in iterator: dx['timezone'] = [tzf.timezone_at(lng=a, lat=b) for a, b in zip(dx['lng'], dx['lat'])] yield dx return func(iterator) tzf = TimezoneFinder() outer = partial(outer, tzf=tzf) pdf = sdf.mapInPandas(outer, schema='lat double, lng double, timezone string').toPandas()
You will need a cached instance of the object on every worker. You could do that as follows instance = [None] def func(iterator: Iterator[pd.DataFrame]) -> Iterator[pd.DataFrame]: if instance[0] is None: instance[0] = TimezoneFinder() tzf = instance[0] for dx in iterator: dx['timezone'] = [tzf.timezone_at(lng=a, lat=b) for a, b in zip(dx['lng'], dx['lat'])] yield dx Note that for this to work, your function would be defined within a module, to give the instance cache somewhere to live. Else you would have to hang it off some builtin module, e.g., os.instance = [].
7
5
72,083,187
2022-5-2
https://stackoverflow.com/questions/72083187/in-django-what-are-media-root-and-media-url-exactly
I read the documentation about MEDIA_ROOT and MEDIA_URL then I could understand them a little bit but not much. MEDIA_ROOT: Absolute filesystem path to the directory that will hold user-uploaded files. MEDIA_URL: URL that handles the media served from MEDIA_ROOT, used for managing stored files. It must end in a slash if set to a non-empty value. You will need to configure these files to be served in both development and production environments. I frequently see them as shown below: # "settings.py" MEDIA_ROOT = os.path.join(BASE_DIR, 'media') MEDIA_URL = '/media/' So, what are MEDIA_ROOT and MEDIA_URL exactly?
First of all, I explain about "MEDIA_ROOT" then "MEDIA_URL". <MEDIA_ROOT> "MEDIA_ROOT" sets the absolute path to the directory where uploaded files are stored and setting "MEDIA_ROOT" never ever influence to media file URL. For example, we have a django project: Then, we set "os.path.join(BASE_DIR, 'media')" which is "C:\Users\kai\django-project\media" in Windows in my case to "MEDIA_ROOT": # "core/settings.py" MEDIA_ROOT = os.path.join(BASE_DIR, 'media') # Here MEDIA_URL = '/media/' And set the code below to "urls.py": # "core/urls.py" if settings.DEBUG: urlpatterns += static(settings.MEDIA_URL, document_root=settings.MEDIA_ROOT) And set the model "Image" as shown below: # "myapp/models.py" class Image(models.Model): image = models.ImageField() def __str__(self): return str(self.image) And set the code below to "admin.py": # "myapp/admin.py" from .models import Image admin.site.register(Image) Then, upload the file "orange.jpg": Then, "media" folder is created at the same level as "db.sqlite3" and "manage.py" which is just under the django project root directory and the uploaded file "orange.jpg" is stored in "media" folder as shown below: Then, upload more files: In addition, we can display the file "orange.jpg" by clicking on "orange.jpg" on "Change image" page of the file as shown below: Then, the file "orange.jpg" is displayed as shown below: Be careful, if you remove the code below from "urls.py": # "core/urls.py" if settings.DEBUG: urlpatterns += static(settings.MEDIA_URL, document_root=settings.MEDIA_ROOT) Then, the file "orange.jpg" is not displayed. Instead, there is an error as shown below: Next, you can store uploaded files under more subdirectories and I explain 2 ways to do that and the first way is recommended because it is flexible and the second way is not recommended because it is not flexible at all. 
The first way to store uploaded files under more subdirectories is first, set "os.path.join(BASE_DIR, 'media')" to "MEDIA_ROOT" as shown below: # "core/settings.py" MEDIA_ROOT = os.path.join(BASE_DIR, 'media') # Here MEDIA_URL = '/media/' And, add "upload_to='images/fruits'" to "models.ImageField()" as shown below: # "myapp/models.py" from django.db import models class Image(models.Model): # Here image = models.ImageField(upload_to='images/fruits') def __str__(self): return str(self.image) Then, uploaded files are stored in "C:\Users\kai\django-project\media\images\fruits" in Windows in my case as shown below: The second way to store uploaded files under more subdirectories is first, set 'media/images/fruits' to the second argument of "os.path.join()" as shown below: # "core/settings.py" # Here MEDIA_ROOT = os.path.join(BASE_DIR, 'media/images/fruits') MEDIA_URL = '/media/' And set no arguments to "models.ImageField()" as shown below: # "myapp/models.py" from django.db import models class Image(models.Model): image = models.ImageField() # Here def __str__(self): return str(self.image) Then, uploaded files are stored in "C:\Users\kai\django-project\media\images\fruits" in Windows in my case as shown below but as I said before, the first way is recommended because it is flexible while the second way is not flexible at all: In addition, if we don't set "MEDIA_ROOT" as shown below: # "core/settings.py" # MEDIA_ROOT = os.path.join(BASE_DIR, 'media') # Here MEDIA_URL = '/media/' Or set an empty string to the second argument of "os.path.join()" as shown below: # "core/settings.py" MEDIA_ROOT = os.path.join(BASE_DIR, '') # Here MEDIA_URL = '/media/' Or don't set the second argument of "os.path.join()" as shown below: # "core/settings.py" MEDIA_ROOT = os.path.join(BASE_DIR) # Here MEDIA_URL = '/media/' And set no arguments to "models.ImageField()" as shown below: # "myapp/models.py" from django.db import models class Image(models.Model): image = models.ImageField() # Here def __str__(self): return str(self.image) Then, uploaded files are stored at the same level as "db.sqlite3" and "manage.py" which is just under the django project root directory as shown below: In addition, after uploading files if we change "MEDIA_ROOT", we cannot display uploaded files while we can still display uploaded files even if we change "models.ImageField()". For example, we set "os.path.join(BASE_DIR, 'media')" to "MEDIA_ROOT": # "core/settings.py" MEDIA_ROOT = os.path.join(BASE_DIR, 'media') # Here MEDIA_URL = '/media/' And, set "upload_to='images/fruits'" to "models.ImageField()" as shown below: # "myapp/models.py" from django.db import models class Image(models.Model): # Here image = models.ImageField(upload_to='images/fruits') def __str__(self): return str(self.image) Then, upload the file "orange.jpg": Then, click on "images/fruits/orange.jpg" on "Change image" page of the file as shown below: Then, the file "orange.jpg" is displayed as shown below: Now, we change "MEDIA_ROOT" from "os.path.join(BASE_DIR, 'media')" to "os.path.join(BASE_DIR, 'hello/world')": # "core/settings.py" MEDIA_ROOT = os.path.join(BASE_DIR, 'hello/world') # Here MEDIA_URL = '/media/' Then again, click on "images/fruits/orange.jpg" on "Change image" page of the file as shown below: Then, the file "orange.jpg" is not displayed. Instead, there is an error as shown below: Then, as I said before, even if we change "models.ImageField()" after uploading files, we can still display uploaded files. 
So now, we change back "MEDIA_ROOT" from "os.path.join(BASE_DIR, 'hello/world')" to "os.path.join(BASE_DIR, 'media')": # "core/settings.py" MEDIA_ROOT = os.path.join(BASE_DIR, 'media') # Here MEDIA_URL = '/media/' And, change "models.ImageField(upload_to='images/fruits')" to "models.ImageField(upload_to='hello/world')": # "myapp/models.py" from django.db import models class Image(models.Model): # Here image = models.ImageField(upload_to='hello/world') def __str__(self): return str(self.image) Then again, click on "images/fruits/orange.jpg" on "Change image" page of the file as shown below: Then, the file "orange.jpg" is displayed as shown below: <MEDIA_URL> Next, I explain about "MEDIA_URL". "MEDIA_URL" sets the directory(middle) part of media file URL between the host part and the file part of media file URL as shown below and setting "MEDIA_URL" never ever influence to the absolute path to the directory where uploaded files are stored: Host Directory File | | | <-------------> <----------> <--------> https://www.example.com/media/images/orange.jpg For example, we set '/media/' to "MEDIA_URL": # "core/settings.py" MEDIA_ROOT = os.path.join(BASE_DIR, 'media') MEDIA_URL = '/media/' # Here And set no arguments to "models.ImageField()" as shown below: # "myapp/models.py" from django.db import models class Image(models.Model): image = models.ImageField() # Here def __str__(self): return str(self.image) Then, upload the file "orange.jpg": Then, go to "Change image" page of the file then click on "orange.jpg": Then, the URL of the file is displayed as shown below: As you can see, the directory part "media" is set between the host part "localhost:8000" and the file part "orange.jpg" Host Directly File | | | <------------> <---> <--------> http://localhost:8000/media/orange.jpg And, this URL below is in this case of "www.example.com" with "https": Host Directly File | | | <-------------> <---> <--------> https://www.example.com/media/orange.jpg And, we can change the directory part of URL even after uploading files. So, just change "MEDIA_URL" from '/media/' to '/images/fruits/' as shown below: # "core/settings.py" MEDIA_ROOT = os.path.join(BASE_DIR, 'media') MEDIA_URL = '/images/fruits/' # Here Then, click on "orange.jpg" again: Then, the directory part "media" is changed to "image/fruits" as shown below: In addition, we can set the directory part of URL with the combination of "MEDIA_URL" and "models.ImageField()". 
In this case, we can only change the part of the directory part set by "MEDIA_URL" after uploading files while we cannot change the part of the directory part set by "models.ImageField()" after uploading files: For example, we set '/media/' to "MEDIA_URL" as shown below: # "core/settings.py" MEDIA_ROOT = os.path.join(BASE_DIR, 'media') MEDIA_URL = '/media/' # Here And add "upload_to='images/fruits'" to "models.ImageField()" as shown below: # "myapp/models.py" from django.db import models class Image(models.Model): # Here image = models.ImageField(upload_to='images/fruits') def __str__(self): return str(self.image) Then, upload the file "orange.jpg": Then, go to "Change image" page of the file then click on "images/fruits/orange.jpg": Then, the URL of the file is displayed as shown below: Then, the directory part is: media/images/fruits Now, we change "MEDIA_URL" from '/media/' to '/hello/world/': # "core/settings.py" MEDIA_ROOT = os.path.join(BASE_DIR, 'media') MEDIA_URL = '/hello/world/' # Here And, change "models.ImageField(upload_to='images/fruits')" to "models.ImageField(upload_to='hey/earth')": # "myapp/models.py" from django.db import models class Image(models.Model): # Here image = models.ImageField(upload_to='hey/earth') def __str__(self): return str(self.image) Then, click on "images/fruits/orange.jpg" again: Then, the URL of the file is displayed as shown below: Then, we could change the part of the directory part 'media' to 'hello/world' set by "MEDIA_URL" after uploading the file "orange.jpg" while we couldn't change the part of the directory part 'images/fruits' to 'hey/earth' set by "models.ImageField()" after uploading the file "orange.jpg": hello/world/images/fruits In addition, if we don't set "MEDIA_URL" as shown below: # "core/settings.py" MEDIA_ROOT = os.path.join(BASE_DIR, 'media') # MEDIA_URL = '/media/' # Here Or set an empty string to "MEDIA_URL" as shown below: # "core/settings.py" MEDIA_ROOT = os.path.join(BASE_DIR, 'media') MEDIA_URL = '' # Here Or set one or more slashes to "MEDIA_URL" as shown below: # "core/settings.py" MEDIA_ROOT = os.path.join(BASE_DIR) MEDIA_URL = '/////' # Here And set no arguments to "models.ImageField()" as shown below: # "myapp/models.py" from django.db import models class Image(models.Model): image = models.ImageField() # Here def __str__(self): return str(self.image) Then, no directory part is set between the host part "localhost:8000" and the file part "orange.jpg" as shown below: http://localhost:8000/orange.jpg
4
13
72,082,251
2022-5-2
https://stackoverflow.com/questions/72082251/error-in-layer-of-discriminator-model-while-making-a-gan-model
I made a GAN model for generating the images based on sample training images of animes. Where on the execution of the code I got this error. ValueError: Input 0 of layer "discriminator" is incompatible with the layer: expected shape=(None, 64, 64, 3), found shape=(64, 64, 3) Even changing the shape of the 1st layer of the discriminator to (None, 64, 64, 3) did not help Code: Preprocessing: import numpy as np import tensorflow as tf from tqdm import tqdm from tensorflow import keras from tensorflow.keras import layers img_h,img_w,img_c=64,64,3 batch_size=128 latent_dim=128 num_epochs=100 dir='/home/samar/Desktop/project2/anime-gan/data' dataset = tf.keras.utils.image_dataset_from_directory( directory=dir, seed=123, image_size=(img_h, img_w), batch_size=batch_size, shuffle=True) xtrain, ytrain = next(iter(dataset)) xtrain=np.array(xtrain) xtrain=np.apply_along_axis(lambda x: x/255.0,0,xtrain) Discriminator model: discriminator = keras.Sequential( [ keras.Input(shape=(64, 64, 3)), layers.Conv2D(64, kernel_size=4, strides=2, padding="same"), layers.LeakyReLU(alpha=0.2), layers.Conv2D(128, kernel_size=4, strides=2, padding="same"), layers.LeakyReLU(alpha=0.2), layers.Conv2D(128, kernel_size=4, strides=2, padding="same"), layers.LeakyReLU(alpha=0.2), layers.Flatten(), layers.Dropout(0.2), layers.Dense(1, activation="sigmoid"), ], name="discriminator", ) discriminator.summary() Generator Model: generator = keras.Sequential( [ keras.Input(shape=(latent_dim,)), layers.Dense(8 * 8 * 128), layers.Reshape((8, 8, 128)), layers.Conv2DTranspose(128, kernel_size=4, strides=2, padding="same"), layers.LeakyReLU(alpha=0.2), layers.Conv2DTranspose(256, kernel_size=4, strides=2, padding="same"), layers.LeakyReLU(alpha=0.2), layers.Conv2DTranspose(512, kernel_size=4, strides=2, padding="same"), layers.LeakyReLU(alpha=0.2), layers.Conv2D(3, kernel_size=5, padding="same", activation="sigmoid"), ], name="generator", ) generator.summary() Training: opt_gen = keras.optimizers.Adam(1e-4) opt_disc = keras.optimizers.Adam(1e-4) loss_fn = keras.losses.BinaryCrossentropy() for epoch in range(10): for idx, real in enumerate(tqdm(xtrain)): batch_size=real.shape[0] random_latent_vectors = tf.random.normal(shape=(batch_size, latent_dim)) with tf.GradientTape() as gen_tape: fake = generator(random_latent_vectors) if idx % 100 == 0: img = keras.preprocessing.image.array_to_img(fake[0]) img.save("/home/samar/Desktop/project2/anime-gan/gen_images/generated_img_%03d_%d.png" % (epoch, idx)) with tf.GradientTape() as disc_tape: loss_disc_real = loss_fn(tf.ones((batch_size,1)), discriminator(real)) loss_disc_fake = loss_fn(tf.zeros((batch_size,1)), discriminator(fake)) loss_disc = (loss_disc_real + loss_disc_fake) / 2 gradients_of_discriminator = disc_tape.gradient(loss_disc, discriminator.trainable_variables) opt_disc.apply_gradients(zip(gradients_of_discriminator, discriminator.trainable_variables)) with tf.GradientTape() as gen_tape: fake = generator(random_latent_vectors) output = discriminator(fake) loss_gen = loss_fn(tf.ones(batch_size, 1), output) grads = gen_tape.gradient(loss_gen, generator.trainable_weights) opt_gen.apply_gradients(zip(grads, generator.trainable_weights)) And also can you please explain me the difference between the shapes (None, 64, 64, 3) and (64, 64, 3)
The problem is you are extracting exactly one batch when running xtrain, ytrain = next(iter(train_ds)) and you are then iterating over this batch in your training loop. That is why you are missing the batch dimension (None). I am not sure what your dataset looks like, but here is a working example using tf.keras.utils.image_dataset_from_directory: import numpy as np import tensorflow as tf from tqdm import tqdm from tensorflow import keras from tensorflow.keras import layers import pathlib dataset_url = "https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz" data_dir = tf.keras.utils.get_file('flower_photos', origin=dataset_url, untar=True) data_dir = pathlib.Path(data_dir) img_h,img_w,img_c=64,64,3 batch_size=128 latent_dim=128 num_epochs=100 train_ds = tf.keras.utils.image_dataset_from_directory( data_dir, seed=123, image_size=(img_h,img_w), batch_size=batch_size) normalization_layer = tf.keras.layers.Rescaling(1./255) train_ds = train_ds.map(lambda x, y: normalization_layer(x)) discriminator = keras.Sequential( [ keras.Input(shape=(64, 64, 3)), layers.Conv2D(64, kernel_size=4, strides=2, padding="same"), layers.LeakyReLU(alpha=0.2), layers.Conv2D(128, kernel_size=4, strides=2, padding="same"), layers.LeakyReLU(alpha=0.2), layers.Conv2D(128, kernel_size=4, strides=2, padding="same"), layers.LeakyReLU(alpha=0.2), layers.Flatten(), layers.Dropout(0.2), layers.Dense(1, activation="sigmoid"), ], name="discriminator", ) discriminator.summary() generator = keras.Sequential( [ keras.Input(shape=(latent_dim,)), layers.Dense(8 * 8 * 128), layers.Reshape((8, 8, 128)), layers.Conv2DTranspose(128, kernel_size=4, strides=2, padding="same"), layers.LeakyReLU(alpha=0.2), layers.Conv2DTranspose(256, kernel_size=4, strides=2, padding="same"), layers.LeakyReLU(alpha=0.2), layers.Conv2DTranspose(512, kernel_size=4, strides=2, padding="same"), layers.LeakyReLU(alpha=0.2), layers.Conv2D(3, kernel_size=5, padding="same", activation="sigmoid"), ], name="generator", ) generator.summary() opt_gen = keras.optimizers.Adam(1e-4) opt_disc = keras.optimizers.Adam(1e-4) loss_fn = keras.losses.BinaryCrossentropy() for epoch in range(10): for idx, real in enumerate(tqdm(train_ds)): batch_size=real.shape[0] random_latent_vectors = tf.random.normal(shape=(batch_size, latent_dim)) with tf.GradientTape() as gen_tape: fake = generator(random_latent_vectors) if idx % 100 == 0: img = keras.preprocessing.image.array_to_img(fake[0]) with tf.GradientTape() as disc_tape: loss_disc_real = loss_fn(tf.ones((batch_size,1)), discriminator(real)) loss_disc_fake = loss_fn(tf.zeros((batch_size,1)), discriminator(fake)) loss_disc = (loss_disc_real + loss_disc_fake) / 2 gradients_of_discriminator = disc_tape.gradient(loss_disc, discriminator.trainable_variables) opt_disc.apply_gradients(zip(gradients_of_discriminator, discriminator.trainable_variables)) with tf.GradientTape() as gen_tape: fake = generator(random_latent_vectors) output = discriminator(fake) loss_gen = loss_fn(tf.ones(batch_size, 1), output) grads = gen_tape.gradient(loss_gen, generator.trainable_weights) opt_gen.apply_gradients(zip(grads, generator.trainable_weights))
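As for the last part of the question: the leading None in (None, 64, 64, 3) is the batch dimension, left unspecified so any batch size can be fed in, while (64, 64, 3) is a single image without that axis. A tiny generic illustration (not tied to the training loop above):
import tensorflow as tf

single = tf.random.uniform((64, 64, 3))    # one image: (64, 64, 3)
batched = tf.expand_dims(single, axis=0)   # (1, 64, 64, 3), which matches (None, 64, 64, 3)
print(single.shape, batched.shape)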
4
5
72,063,166
2022-4-29
https://stackoverflow.com/questions/72063166/read-and-group-json-files-by-date-element-using-pyspark
I have multiple JSON files (10 TB ~) on a S3 bucket, and I need to organize these files by a date element present in every json document. What I think that my code needs to do Read all json files in the s3 bucket. Keep all documents which have the element "creation_date" between 2022-01-01 and 2022-04-01 Save them in another bucket in a parquet format. I'm not sure that's the right thing to do, considering the size that I'm dealing it. Here's an example of a json document. Each file has multiple of these documents. { "id": 123456, "creation_date": "2022-01-01T23:35:16", "params": { "doc_info": "AXBD", "return_date": "20/05/2021", "user_name": "XXXXXXXX", "value": "40,00" }, "user_id": "1234567", "type": "TEST" } ] Here's what I already tried on a DB notebook, but in fact, I can't use the code directly on a notebook. I necessarily need to write a spark code and run on an airflow dag, because I don't have write access on the bucket using directly from the notebook. # Trying to read all the json files df_test = spark.read.json("s3://my-bucket/**/**" + "/*.json") # Filtering all documents that has the creation_date period that I want df_test_filter = df_test.filter(F.col("creation_date").between('2022-01-01','2022-04-01')) # Write parquet on another bucket # In this test, I'm saving on a local bucket that I have write access. df_test_filter.write.mode('overwrite').parquet("s3://my-local-test-bucket/") That seems to work fine on a single json file that I use to test, but my questions are: How can I do this without a databricks notebook, and using an airflow dag with pyspark? Thinking in performance issues, there is a better way to do this?
Do you want to run the job only once or do you want to do it periodically? One run What you have should work well # Trying to read all the json files sdf = spark.read.json("s3://my-bucket/**/**/*.json") The only thing I'd add is to partition the output by the date to speed up queries: ( # Filtering all documents that have the creation_date period that I want sdf.filter(F.col("creation_date").between('2022-01-01','2022-04-01')) # Partition by creation date so that it's easier to query .write.partitionBy("creation_date") # Export the data .mode('append') .parquet("s3://my-local-test-bucket/") ) Running it periodically Here I wonder what the file structure is. It is a good idea to have data partitioned by some date, and in this case it looks like you might have the input data partitioned by another date (maybe insert_date?). Assuming that's the case, I suggest that each day you read that data and then write it as parquet partitioned by the date you want. This would be done by: # Trying to read all the json files sdf = spark.read.json(f"s3://my-bucket/insert_date={today:%Y-%m-%d}/*/") sdf.write.partitionBy("creation_date").mode('append').parquet("s3://my-local-test-bucket/") And later on you can simply retrieve the data you need with: sdf = ( spark.read.parquet("s3://my-local-test-bucket/") .where(F.col("creation_date").between('2022-01-01','2022-04-01')) )
5
6
72,103,585
2022-5-3
https://stackoverflow.com/questions/72103585/how-to-pass-file-object-to-httpx-request-in-fastapi-endpoint
The idea is to get file object from one endpoint and send it to other endpoints to work with it without saving it. Let's have this expample code: import httpx from fastapi import Request, UploadFile, File app = FastAPI() client = httpx.AsyncClient() @app.post("/endpoint/") async def foo(request: Request, file: UploadFile = File(...)) urls = ["/some/other/endpoint", "/another/endpoint/"] for url in urls: response = await client.post(url) # here I need to send the file to the other endpoint return {"bar": "baz"} @app.post("/some/other/endpoint/") async def baz(request: Request, file: UploadFile = File(...)): # and here to use it # Do something with the file object return {"file": file.filename} @app.post("/another/endpoint/") async def baz(request: Request, file: UploadFile = File(...)): # and here to use it too # Do something with the file object return {"file": file.content_type} As stated here I tried to do something like this: data = {'file': file} response = await client.post(url, data=data) But it errored with '{"detail":[{"loc":["body","file"],"msg":"Expected UploadFile, received: <class \'str\'>","type":"value_error"}]}' Example curl request: curl -X 'POST' -F 'file=@somefile' someserver/endpoint/
httpx similar to requests uses files=.... to send files. ie. post(..., files={'file': file.file}, ...) or with filename post(..., files={'file': (file.filename, file.file)}, ...) BTW: If you send the same file a few times then you may need to move pointer to the beginning of file after sending file.file.seek(0) or await file.seek(0) Full working code from fastapi import FastAPI, Request, UploadFile, File import httpx app = FastAPI() client = httpx.AsyncClient() @app.post("/endpoint/") async def foo(request: Request, file: UploadFile = File(...)): print('/endpoint/') urls = ["/some/other/endpoint/", "/another/endpoint/"] results = [] for url in urls: response = await client.post('http://localhost:8000' + url, files={'file': (file.filename, file.file)}) #file.file.seek(0) # move back at the beginning of file after sending to other URL await file.seek(0) # move back at the beginning of file after sending to other URL results.append(response) results = [item.text for item in results] print('results:', results) return {"bar": "baz"} @app.post("/some/other/endpoint/") async def baz(request: Request, file: UploadFile = File(...)): print('/some/other/endpoint/') print('filename:', file.filename) print('content_type:', file.content_type) # Do something with the file object return {"file": file.filename} @app.post("/another/endpoint/") async def baz(request: Request, file: UploadFile = File(...)): print('/another/endpoint/') print('filename:', file.filename) print('content_type:', file.content_type) # Do something with the file object return {"file": file.content_type}
6
8
72,041,522
2022-4-28
https://stackoverflow.com/questions/72041522/how-to-add-title-to-the-plot-of-shap-plots-force-with-matplotlib
I want to add some modifications to my force plot (created by shap.plots.force) using Matplotlib, e.g. adding title, using tight layout etc. However, I tried to add title and the title doesn't show up. Any ideas why and how can I add the title using Matplotlib? import numpy as np import shap import matplotlib.pyplot as plt myBaseline=1.5 shap_values_0 = np.array([-1, -4, 3]) test_point_0 = np.array([11, 12, 13]) features_names = ['a1','a2','a3'] shap.plots.force(myBaseline,shap_values_0,test_point_0,features_names,matplotlib = 1) plt.suptitle("This is my title") # It doesn't show up, why? fig = plt.gcf() fig.canvas.draw() plt.close()
The last lines in force_plot are: if show: plt.show() else: return plt.gcf() So, if you set show=False you can get the prepared SHAP plot as a figure object and customize it to your needs as usual: import numpy as np import matplotlib.pyplot as plt import shap myBaseline = 1.5 shap_values_0 = np.array([-1, -4, 3]) test_point_0 = np.array([11, 12, 13]) features_names = ["a1", "a2", "a3"] shap.plots.force( myBaseline, shap_values_0, test_point_0, features_names, matplotlib=True, show=False ) plt.title("This is my title", y=1.75) plt.show()
4
6
72,101,566
2022-5-3
https://stackoverflow.com/questions/72101566/should-i-use-a-capital-l-list-for-type-hinting-in-python-3-9
In Python 3.9+ I can write list_of_integers: list[int], but I see senior developers using the older syntax (even in Python 3.9 and 3.10 scripts): from typing import List list_of_integers: List[int] Is this superior for backwards compatibility and explicitness?
When the current version of documentation for typing.List says: Deprecated since version 3.9: builtins.list now supports []. See PEP 585 and Generic Alias Type. It should be considered best practice unless you have a compelling reason not to use it (like what you said about backward compatibility for older versions of Python).
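One middle ground worth knowing about when older interpreters must still be supported: with postponed evaluation of annotations, the new syntax is accepted in annotations even on Python 3.7/3.8. Static type checkers understand it, but anything that evaluates annotations at runtime still needs typing.List there:
from __future__ import annotations  # PEP 563: annotations are no longer evaluated at runtime

def total(xs: list[int]) -> int:    # valid as an annotation even before Python 3.9
    return sum(xs)

print(total([1, 2, 3]))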
16
9
72,061,965
2022-4-29
https://stackoverflow.com/questions/72061965/create-voronoi-art-with-rounded-region-edges
I'm trying to create some artistic "plots" like the ones below: The color of the regions do not really matter, what I'm trying to achieve is the variable "thickness" of the edges along the Voronoi regions (espescially, how they look like a bigger rounded blob where they meet in corners, and thinner at their middle point). I've tried by "painting manually" each pixel based on the minimum distance to each centroid (each associated with a color): n_centroids = 10 centroids = [(random.randint(0, h), random.randint(0, w)) for _ in range(n_centroids)] colors = np.array([np.random.choice(range(256), size=3) for _ in range(n_centroids)]) / 255 for x, y in it.product(range(h), range(w)): distances = np.sqrt([(x - c[0])**2 + (y - c[1])**2 for c in centroids]) centroid_i = np.argmin(distances) img[x, y] = colors[centroid_i] plt.imshow(img, cmap='gray') Or by scipy.spatial.Voronoi, that also gives me the vertices points, although I still can't see how I can draw a line through them with the desired variable thickness. from scipy.spatial import Voronoi, voronoi_plot_2d # make up data points points = [(random.randint(0, 10), random.randint(0, 10)) for _ in range(10)] # add 4 distant dummy points points = np.append(points, [[999,999], [-999,999], [999,-999], [-999,-999]], axis = 0) # compute Voronoi tesselation vor = Voronoi(points) # plot voronoi_plot_2d(vor) # colorize for region in vor.regions: if not -1 in region: polygon = [vor.vertices[i] for i in region] plt.fill(*zip(*polygon)) # fix the range of axes plt.xlim([-2,12]), plt.ylim([-2,12]) plt.show() Edit: I've managed to get a somewhat satisfying result via erosion + corner smoothing (via median filter as suggested in the comments) on each individual region, then drawing it into a black background. res = np.zeros((h,w,3)) for color in colors: region = (img == color)[:,:,0] region = region.astype(np.uint8) * 255 region = sg.medfilt2d(region, 15) # smooth corners # make edges from eroding regions region = cv2.erode(region, np.ones((3, 3), np.uint8)) region = region.astype(bool) res[region] = color plt.imshow(res) But as you can see the "stretched" line along the boundaries/edges of the regions is not quite there. Any other suggestions?
This is what @JohanC suggestion looks like. IMO, it looks much better than my attempt with Bezier curves. However, there appears to be a small problem with the RoundedPolygon class, as there are sometimes small defects at the corners (e.g. between blue and purple in the image below). Edit: I fixed the RoundedPolygon class. #!/usr/bin/env python # coding: utf-8 """ https://stackoverflow.com/questions/72061965/create-voronoi-art-with-rounded-region-edges """ import numpy as np import matplotlib.pyplot as plt from matplotlib import patches, path from scipy.spatial import Voronoi, voronoi_plot_2d def shrink(polygon, pad): center = np.mean(polygon, axis=0) resized = np.zeros_like(polygon) for ii, point in enumerate(polygon): vector = point - center unit_vector = vector / np.linalg.norm(vector) resized[ii] = point - pad * unit_vector return resized class RoundedPolygon(patches.PathPatch): # https://stackoverflow.com/a/66279687/2912349 def __init__(self, xy, pad, **kwargs): p = path.Path(*self.__round(xy=xy, pad=pad)) super().__init__(path=p, **kwargs) def __round(self, xy, pad): n = len(xy) for i in range(0, n): x0, x1, x2 = np.atleast_1d(xy[i - 1], xy[i], xy[(i + 1) % n]) d01, d12 = x1 - x0, x2 - x1 l01, l12 = np.linalg.norm(d01), np.linalg.norm(d12) u01, u12 = d01 / l01, d12 / l12 x00 = x0 + min(pad, 0.5 * l01) * u01 x01 = x1 - min(pad, 0.5 * l01) * u01 x10 = x1 + min(pad, 0.5 * l12) * u12 x11 = x2 - min(pad, 0.5 * l12) * u12 if i == 0: verts = [x00, x01, x1, x10] else: verts += [x01, x1, x10] codes = [path.Path.MOVETO] + n*[path.Path.LINETO, path.Path.CURVE3, path.Path.CURVE3] verts[0] = verts[-1] return np.atleast_1d(verts, codes) if __name__ == '__main__': # make up data points n = 100 max_x = 20 max_y = 10 points = np.c_[np.random.uniform(0, max_x, size=n), np.random.uniform(0, max_y, size=n)] # add 4 distant dummy points points = np.append(points, [[2 * max_x, 2 * max_y], [ -max_x, 2 * max_y], [2 * max_x, -max_y], [ -max_x, -max_y]], axis = 0) # compute Voronoi tesselation vor = Voronoi(points) fig, ax = plt.subplots(figsize=(max_x, max_y)) for region in vor.regions: if region and (not -1 in region): polygon = np.array([vor.vertices[i] for i in region]) resized = shrink(polygon, 0.15) ax.add_patch(RoundedPolygon(resized, 0.2, color=plt.cm.Reds(0.5 + 0.5*np.random.rand()))) ax.axis([0, max_x, 0, max_y]) ax.axis('off') ax.set_facecolor('black') ax.add_artist(ax.patch) ax.patch.set_zorder(-1) plt.show()
6
5
72,096,495
2022-5-3
https://stackoverflow.com/questions/72096495/how-to-rename-a-column-for-a-dataframe-in-pyspark
below is part code: df = None F_DATE = ['202101', '202102', '202103'] for date in F_DATE: if df is None: df = spark.sql("select count(*) as Total_count from test_" + date) else: df2 = spark.sql("select count(*) as Total_count from test_" + date) df = df.union(df2) df.write.csv('/csvs/test.csv') I tried 'toDF()', 'withColumnRenamed()', and 'selectExpr()', but the column name was not changed. NOTE. Use the table in Hive. ADD I've never used "df.show()" to write code, and I've used "df.show()" to read code. When used "df.show()" in write code, it was confirmed that the column name came out well, and when used "df.show()" in read code, it was confirmed that the column name did not come out properly.
You can use: df = df.withColumnRenamed('old_name', 'new_name')
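If the underlying symptom is that the name disappears when the CSV is written and read back (an assumption based on the note in the question), keep in mind that Spark's CSV writer does not emit a header row by default, so the reader cannot recover the column names without it:
df = df.withColumnRenamed('Total_count', 'total_count')
df.write.option('header', True).csv('/csvs/test.csv')

df_back = spark.read.option('header', True).csv('/csvs/test.csv')
df_back.printSchema()   # now shows 'total_count' instead of _c0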
4
4
72,091,572
2022-5-2
https://stackoverflow.com/questions/72091572/how-to-compute-cross-entropy-loss-for-sequences
I have a sequence continuation/prediction task (input: a sequence of class indices, output: a sequence of class indices) and I use Pytorch. My neural network returns a tensor of shape (batch_size, sequence_length, numb_classes) where the entries are a number proportional to the propability that the class with this index is the next class in the sequence. My targets in the training data are of shape (batch_size, sequence_length) (just the sequences of the real predictions). I want to use the CrossEntropyLoss My question: How do I use the Cross Entropy Loss function? Which input shapes are required? Thank you!
The documentation page of nn.CrossEntropyLoss clearly states: Input: shape (C), (N, C) or (N, C, d_1, d_2, ..., d_K) with K >= 1 in the case of K-dimensional loss. Target: If containing class indices, shape (), (N) or (N, d_1, d_2, ..., d_K) with K >= 1 in the case of K-dimensional loss, where each value should be in [0, C). If containing class probabilities, same shape as the input and each value should be between [0, 1]. Just to be crystal clear, "input" refers to the output prediction of your model while the "target" is the label tensor. In a nutshell, the target must have one dimension less than the input; that extra dimension in the input holds the per-class logit values. Usually, we say the target is in the dense format: it only contains the class indices corresponding to the true labels. The example you give corresponds to the use case of: #input = (batch_size, sequence_length, numb_classes) #target = (batch_size, sequence_length) which is the (N, C, d_1) input / (N, d_1) target case, i.e., you need to permute the axes, or transpose two axes of your input tensor, such that it gets a shape of (batch_size, numb_classes, sequence_length), which is (N, C, d_1). You can do so with either torch.Tensor.transpose or torch.Tensor.permute: >>> input.permute(0,2,1) or >>> input.transpose(1,2)
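A concrete shape check of that recipe, with made-up sizes:
import torch
import torch.nn as nn

batch_size, seq_len, num_classes = 8, 20, 5
logits = torch.randn(batch_size, seq_len, num_classes)          # model output: (N, d_1, C)
targets = torch.randint(0, num_classes, (batch_size, seq_len))  # labels: (N, d_1)

loss_fn = nn.CrossEntropyLoss()
loss = loss_fn(logits.permute(0, 2, 1), targets)  # input permuted to (N, C, d_1)
print(loss)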
5
2
72,083,896
2022-5-2
https://stackoverflow.com/questions/72083896/how-to-stretch-a-line-to-fit-image-with-python-opencv
I have an image with the size of W * H, I want to draw a line on this and the line should be automatically fit to the image, for example if I draw it: I want this: How can I do this in Python and OpenCV? Thanks
Method #1: Just drawing the extended line (no coordinates) Before -> After Here's a function when given points p1 and p2, will only draw the extended line. By default, the line is clipped by the image boundaries. There is also a distance parameter to determine how far to draw from the original starting point or until the line hits the border of the image. If you need the new (x1, y1) and (x2, y2) coordinates, see section #2 import cv2 import numpy as np """ @param: p1 - starting point (x, y) @param: p2 - ending point (x, y) @param: distance - distance to extend each side of the line """ def extend_line(p1, p2, distance=10000): diff = np.arctan2(p1[1] - p2[1], p1[0] - p2[0]) p3_x = int(p1[0] + distance*np.cos(diff)) p3_y = int(p1[1] + distance*np.sin(diff)) p4_x = int(p1[0] - distance*np.cos(diff)) p4_y = int(p1[1] - distance*np.sin(diff)) return ((p3_x, p3_y), (p4_x, p4_y)) # Create blank black image using Numpy original = np.zeros((500,500,3), dtype=np.uint8) image1 = original.copy() p1 = (250, 100) p2 = (375, 250) cv2.line(image1, p1, p2, [255,255,255], 2) # Second image, calculate new extended points image2 = original.copy() p3, p4 = extend_line(p1, p2) cv2.line(image2, p3, p4, [255,255,255], 2) cv2.imshow('image1', image1) cv2.imshow('image2', image2) cv2.waitKey() Method #2: Full drawing with coordinates If you need the new (x1, y1) and (x2, y2) coordinates, it gets a little more complicated since we need to calculate the resulting new points for each possible case. The possible cases are horizontal, vertical, positively sloped, negatively sloped, and exact diagonals. Here's the result for each of the cases with the new two coordinate points: white is the original line and the green is the extended line Vertical (250, 0) (250, 500) Horizontal (0, 300) (500, 300) Positive slope (0, 450) (450, 0) Negative slope (0, 142) (500, 428) Left corner diagonal (0, 0) (500, 500) Right corner diagonal (0, 0) (500, 500) Code import numpy as np import cv2 import math """ @param: dimensions - image shape from Numpy (h, w, c) @param: p1 - starting point (x1, y1) @param: p2 - ending point (x2, y2) @param: SCALE - default parameter to ensure that extended lines go through borders """ def extend_line(dimensions, p1, p2, SCALE=10): # Calculate the intersection point given (x1, y1) and (x2, y2) def line_intersection(line1, line2): x_diff = (line1[0][0] - line1[1][0], line2[0][0] - line2[1][0]) y_diff = (line1[0][1] - line1[1][1], line2[0][1] - line2[1][1]) def detect(a, b): return a[0] * b[1] - a[1] * b[0] div = detect(x_diff, y_diff) if div == 0: raise Exception('lines do not intersect') dist = (detect(*line1), detect(*line2)) x = detect(dist, x_diff) / div y = detect(dist, y_diff) / div return int(x), int(y) x1, x2 = 0, 0 y1, y2 = 0, 0 # Extract w and h regardless of grayscale or BGR image if len(dimensions) == 3: h, w, _ = dimensions elif len(dimensions) == 2: h, w = dimensions # Take longest dimension and use it as maxed out distance if w > h: distance = SCALE * w else: distance = SCALE * h # Reorder smaller X or Y to be the first point # and larger X or Y to be the second point try: slope = (p2[1] - p1[1]) / (p1[0] - p2[0]) # HORIZONTAL or DIAGONAL if p1[0] <= p2[0]: x1, y1 = p1 x2, y2 = p2 else: x1, y1 = p2 x2, y2 = p1 except ZeroDivisionError: # VERTICAL if p1[1] <= p2[1]: x1, y1 = p1 x2, y2 = p2 else: x1, y1 = p2 x2, y2 = p1 # Extend after end-point A length_A = math.sqrt((x2 - x1)**2 + (y2 - y1)**2) p3_x = int(x1 + (x1 - x2) / length_A * distance) p3_y = int(y1 + (y1 - y2) / length_A * distance) # 
Extend after end-point B length_B = math.sqrt((x1 - x2)**2 + (y1 - y2)**2) p4_x = int(x2 + (x2 - x1) / length_B * distance) p4_y = int(y2 + (y2 - y1) / length_B * distance) # -------------------------------------- # Limit coordinates to borders of image # -------------------------------------- # HORIZONTAL if y1 == y2: if p3_x < 0: p3_x = 0 if p4_x > w: p4_x = w return ((p3_x, p3_y), (p4_x, p4_y)) # VERTICAL elif x1 == x2: if p3_y < 0: p3_y = 0 if p4_y > h: p4_y = h return ((p3_x, p3_y), (p4_x, p4_y)) # DIAGONAL else: A = (p3_x, p3_y) B = (p4_x, p4_y) C = (0, 0) # C-------D D = (w, 0) # |-------| E = (w, h) # |-------| F = (0, h) # F-------E if slope > 0: # 1st point, try C-F side first, if OTB then F-E new_x1, new_y1 = line_intersection((A, B), (C, F)) if new_x1 > w or new_y1 > h: new_x1, new_y1 = line_intersection((A, B), (F, E)) # 2nd point, try C-D side first, if OTB then D-E new_x2, new_y2 = line_intersection((A, B), (C, D)) if new_x2 > w or new_y2 > h: new_x2, new_y2 = line_intersection((A, B), (D, E)) return ((new_x1, new_y1), (new_x2, new_y2)) elif slope < 0: # 1st point, try C-F side first, if OTB then C-D new_x1, new_y1 = line_intersection((A, B), (C, F)) if new_x1 < 0 or new_y1 < 0: new_x1, new_y1 = line_intersection((A, B), (C, D)) # 2nd point, try F-E side first, if OTB then E-D new_x2, new_y2 = line_intersection((A, B), (F, E)) if new_x2 > w or new_y2 > h: new_x2, new_y2 = line_intersection((A, B), (E, D)) return ((new_x1, new_y1), (new_x2, new_y2)) # Vertical # ------------------------------- # p1 = (250, 100) # p2 = (250, 300) # ------------------------------- # Horizontal # ------------------------------- # p1 = (100, 300) # p2 = (400, 300) # ------------------------------- # Positive slope # ------------------------------- # C-F, C-D # p1 = (50, 400) # p2 = (400, 50) # C-F, E-D # p1 = (50, 400) # p2 = (400, 50) # F-E, E-D # p2 = (250, 400) # p1 = (400, 250) # F-E, C-D # p2 = (250, 400) # p1 = (300, 250) # ------------------------------- # Negative slope # ------------------------------- # C-F, E-D # p1 = (100, 200) # p2 = (450, 400) # C-F, F-E # p2 = (100, 200) # p1 = (250, 400) # C-D, D-E # p1 = (100, 50) # p2 = (450, 400) # C-D, F-E p1 = (100, 50) p2 = (250, 400) # ------------------------------- # Exact corner diagonals # ------------------------------- # p1 = (50,50) # p2 = (300, 300) # p2 = (375, 125) # p1 = (125, 375) # ------------------------------- image = np.zeros((500,500,3), dtype=np.uint8) p3, p4 = extend_line(image.shape, p1, p2) print(p3, p4) cv2.line(image, p3, p4, [255,255,255], 2) cv2.line(image, p1, p3, [36,255,12], 2) cv2.line(image, p2, p4, [36,255,12], 2) cv2.imshow('image', image) cv2.waitKey()
6
7
72,092,993
2022-5-2
https://stackoverflow.com/questions/72092993/i-want-to-use-boto3-in-async-function-python
I am developing a web scraper in Playwright and want to upload images to AWS S3 asynchronously, but boto3 is not async. How can I fix it? class Boto3: def __init__(self, key, id): self.S3 = boto3.client('s3', aws_access_key_id=aws_key_id, aws_secret_access_key=aws_secret) def upload_stream(self, stream, bucket_name, key): self.S3.put_object(Body=stream, Bucket=bucket_name, Key=key) ... ... class Scraper: def __init__(self, key, id): self.S3 = boto3.client('s3', aws_access_key_id=id, aws_secret_access_key=key) async def _save_image(self, res): buffer = await res.body() # S3.put_object is not an async function. self.S3.put_object( Body=buffer, Bucket=bucket_name, Key=bucket_key, ) async def scrape(): playwright = await async_playwright().start() browser = await playwright.chromium.launch( headless = True, devtools = False ) page = browser.new_page() page.on('response', _save_image) await page.goto('https://www.example.com') scraper = Scraper(key, id) asyncio.run(scraper.scrape()) self.S3.put_object: this function is not async, so I want to change it to an async version. How can I fix it? Thanks in advance.
How to fix it? You can't, as boto3 is not async. At best you can try a third party, non-AWS library, such as aioboto3 in place of boto3.
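For illustration, here is a minimal sketch of what the upload could look like with aioboto3. Treat it as an assumption-laden sketch rather than a definitive implementation: aioboto3 is a third-party package whose API has changed between releases, and bucket_name / bucket_key are placeholders taken from the question.

import aioboto3

class Scraper:
    def __init__(self, key, id):
        # aioboto3 exposes a Session; its clients are async context managers (assumed API)
        self.session = aioboto3.Session(aws_access_key_id=id, aws_secret_access_key=key)

    async def _save_image(self, res):
        buffer = await res.body()
        async with self.session.client("s3") as s3:
            # put_object is awaitable here, so it no longer blocks the event loop
            await s3.put_object(Body=buffer, Bucket=bucket_name, Key=bucket_key)

Another common workaround is to keep plain boto3 and push the blocking call into a thread with asyncio.to_thread(...) or loop.run_in_executor(...).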
16
20
72,090,856
2022-5-2
https://stackoverflow.com/questions/72090856/where-is-the-interp-function-in-numpy-core-multiarray-located
The source code for numpy.interp calls a compiled_interp function which is apparently the interp function imported from numpy.core.multiarray. I went looking for this function but I can not find it inside that file. What am I missing?
The interp Python function of numpy.core.multiarray is exported in multiarraymodule.c. It is mapped to arr_interp which is a C function defined in compiled_base.c. The heart of the computation can be found here.
4
4
72,076,793
2022-5-1
https://stackoverflow.com/questions/72076793/membership-for-list-of-arrays-valueerror-the-truth-value-of-an-array-with-more
Q = [np.array([0, 1]), np.array([1, 2]), np.array([2, 3]), np.array([3, 4])] for q in Q: print(q in Q) Running the code above, it gives me the result 'True' at the first iteration, while ValueError comes out afterwards. True ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all() I have no idea why it starts to go wrong at second iteration. Anybody can help me plz..
Essentially, you can't use in to test for numpy arrays in a Python list. It will only ever work for the first element, because of an optimisation in the way Python tests for equality. What's happening is that the implementation for list.__contains__ (which in defers to), is using a short-cut to find a match faster, by first checking for identity. Most people using Python know this as the is operator. This is faster than == equality checks because all is has to do is see if the pointers of both objects are the same value, it is worth checking for first. An identity test works the same for any Python object, including numpy arrays. The implementation essentially would look like this if it was written in Python: def __contains__(self, needle): for elem in self: if needle is elem or needle == elem: return True return False What happens for your list of numpy arrays then is this: for q in Q, step 1: q = Q[0] q in Q is then the same as Q.__contains__(Q[0]) Q[0] is self[0] => True! for q in Q, step 2: q = Q[1] q in Q is then the same as Q.__contains__(Q[1]) Q[1] is self[0] => False :-( Q[1] == self[0] => array([False, False]), because Numpy arrays use broadcasting to compare each element in both arrays. The array([False, False]) result is not a boolean, but if wants a boolean result, so it is passed to (C equivalent of) the bool() function. bool(array([False, False])) produces the error you see. Or, done manually: >>> import numpy as np >>> Q = [np.array([0, 1]), np.array([1, 2]), np.array([2, 3]), np.array([3, 4])] >>> Q[0] is Q[0] True >>> Q[1] is Q[0] False >>> Q[1] == Q[0] array([False, False]) >>> bool(Q[1] == Q[0]) Traceback (most recent call last): File "<stdin>", line 1, in <module> ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all() You'll have to use any() and numpy.array_equal() to create a version of list.__contains__ that doesn't use (normal) == equality checks: def list_contains_array(lst, arr): return any(np.array_equal(arr, elem) for elem in lst) and you can then use that to get True for your loop: >>> for q in Q: ... print(list_contains_array(Q, q)) ... True True True True
4
5
72,072,824
2022-4-30
https://stackoverflow.com/questions/72072824/python-how-to-get-enum-value-by-index
I have an Enum of days_of_the_week in Python: class days_of_the_week(str, Enum): monday = 'monday' tuesday = 'tuesday' wednesday = 'wednesday' thursday = 'thursday' friday = 'friday' saturday = 'saturday' sunday = 'sunday' I want to access the value using the index. I've tried: days_of_the_week.value[index] days_of_the_week[index].value days_of_the_week.values()[index] and so on... But nothing I tried returned the value of the enum (e.g. days_of_the_week[1] >>> 'tuesday') Is there a way?
IIUC, you want to do: from enum import Enum class days_of_the_week(Enum): monday = 0 tuesday = 1 wednesday = 2 thursday = 3 friday = 4 saturday = 5 sunday = 6 >>> days_of_the_week(1).name 'tuesday'
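As a small follow-up, if you would rather keep the original string values from the question, you can index into the member list instead, since Enum preserves definition order:

>>> list(days_of_the_week)[1].value
'tuesday'

This avoids changing the values to integers just to get positional access.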
11
11
72,071,447
2022-4-30
https://stackoverflow.com/questions/72071447/python-enum-and-pydantic-accept-enum-members-composition
I have an enum: from enum import Enum class MyEnum(Enum): val1 = "val1" val2 = "val2" val3 = "val3" I would like to validate a pydantic field based on that enum. from pydantic import BaseModel class MyModel(BaseModel): my_enum_field: MyEnum BUT I would like this validation to also accept strings that are composed of the Enum members. So for example, "val1_val2_val3" or "val1_val3" are valid inputs. I cannot make this field a string field with a validator since I use a test library (hypothesis and pydantic-factories) that needs this type in order to render one of the values from the enum (for mocking random inputs). So this: from pydantic import BaseModel, validator class MyModel(BaseModel): my_enum_field: str @validator('my_enum_field', pre=True) def validate_my_enum_field(cls, value): split_val = str(value).split('_') if not all(v in MyEnum._value2member_map_ for v in split_val): raise ValueError() return value could work, but it breaks my test suites because the field is no longer of the enum type. How can I keep this field as an Enum type (to keep my mock structures valid) and make pydantic accept composite values at the same time? So far, I tried to dynamically extend the enum, with no success.
I looked at this a bit further, and I believe something like this could be helpful. You can create a new class to define the property that is a list of enum values. This class can supply a customized validate method and supply a __modify_schema__ to keep the information present about being a string in the json schema. We can define a base class for generic lists of concatenated enums like this: from typing import Generic, TypeVar, Type from enum import Enum T = TypeVar("T", bound=Enum) class ConcatenatedEnum(Generic[T], list[T]): enum_type: Type[T] @classmethod def __get_validators__(cls): yield cls.validate @classmethod def validate(cls, value: str): return list(map(cls.enum_type, value.split("_"))) @classmethod def __modify_schema__(cls, field_schema: dict): all_values = ', '.join(f"'{ex.value}'" for ex in cls.enum_type) field_schema.update( title=f"Concatenation of {cls.enum_type.__name__} values", description=f"Underscore delimited list of values {all_values}", type="string", ) if "items" in field_schema: del field_schema["items"] In the __modify_schema__ method I also provide a way to generate a description of which values are valid. To use this in your application: class MyEnum(Enum): val1 = "val1" val2 = "val2" val3 = "val3" class MyEnumList(ConcatenatedEnum[MyEnum]): enum_type = MyEnum class MyModel(BaseModel): my_enum_field: MyEnumList Examples Models: print(MyModel.parse_obj({"my_enum_field": "val1"})) print(MyModel.parse_obj({"my_enum_field": "val1_val2"})) my_enum_field=[<MyEnum.val1: 'val1'>] my_enum_field=[<MyEnum.val1: 'val1'>, <MyEnum.val2: 'val2'>] Example Schema: print(json.dumps(MyModel.schema(), indent=2)) { "title": "MyModel", "type": "object", "properties": { "my_enum_field": { "title": "Concatenation of MyEnum values", "description": "Underscore delimited list of values 'val1', 'val2', 'val3'", "type": "string" } }, "required": [ "my_enum_field" ] }
9
7
72,062,001
2022-4-29
https://stackoverflow.com/questions/72062001/remove-everything-of-a-specific-color-with-a-color-variation-tolerance-from-an
I have some text in blue #00a2e8, and some text in black on a PNG image (white background). How to remove everything in blue (including text in blue) on an image with Python PIL or OpenCV, with a certain tolerance for the variations of color? Indeed, every pixel of the text is not perfectly of the same color, there are variations, shades of blue. Here is what I was thinking: convert from RGB to HSV find the Hue h0 for the blue do a Numpy mask for Hue in the interval [h0-10, h0+10] set these pixels to white Before coding this, is there a more standard way to do this with PIL or OpenCV Python? Example PNG file: foo and bar blocks should be removed
Your image has some issues. Firstly, it has a completely superfluous alpha channel which can be ignored. Secondly, the colours around your blues are quite a long way from blue! I used your planned approach and found the removal was pretty poor: #!/usr/bin/env python3 import cv2 import numpy as np # Load image im = cv2.imread('nwP8M.png') # Define lower and upper limits of our blue BlueMin = np.array([90, 200, 200],np.uint8) BlueMax = np.array([100, 255, 255],np.uint8) # Go to HSV colourspace and get mask of blue pixels HSV = cv2.cvtColor(im,cv2.COLOR_BGR2HSV) mask = cv2.inRange(HSV, BlueMin, BlueMax) # Make all pixels in mask white im[mask>0] = [255,255,255] cv2.imwrite('DEBUG-plainMask.png', im) That gives this: If you broaden the range, to get the rough edges, you start to affect the green letters, so instead I dilated the mask so that pixels spatially near the blues are made white as well as pixels chromatically near the blues: # Try dilating (enlarging) mask with 3x3 structuring element SE = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3,3)) mask = cv2.dilate(mask, SE, iterations=1) # Make all pixels in mask white im[mask>0] = [255,255,255] cv2.imwrite('result.png', im) That gets you this: You may wish to diddle with the actual values for your other images, but the principle is the same.
4
10
72,059,811
2022-4-29
https://stackoverflow.com/questions/72059811/how-to-create-a-new-dataframe-column-with-a-set-of-nested-if-rules-apply-is-ver
What I need I need to create new columns in pandas dataframes, based on a set of nested if statements. E.g. if city == 'London' and income > 10000: return 'group 1' elif city == 'Manchester' or city == 'Leeds': return 'group 2' elif borrower_age > 50: return 'group 3' else: return 'group 4' This is actually a simplifcation - in most cases I'd need to create something like 10 or more possible outputs, not 4 as above, but you hopefully get the gist. The issue My problem is that I have not found a way to make the code acceptably fast. I understand that, if the choice were binary, I could use something like numpy.where() , but I have not found a way to vectorise the code or anyway to make it fast enough. I suppose I could probably nest a number of np.where statements and that would be faster, but then the code would be harder to read and much more prone to errors. What I have tried I have tried the following: +────────────────────────────────────────────────+──────────────+ | Method | Time (secs) | +────────────────────────────────────────────────+──────────────+ | dataframe.apply | 29 | | dataframe.apply on a numba-optimised function | 31 | | sqlite | 16 | +────────────────────────────────────────────────+──────────────+ "sqlite" means: loading the dataframe into a sqlite in-memory database, creating the new field there, and then exporting back to a dataframe Sqlite is faster but still unacceptably slow: the same thing on a SQL Server running on the same machine takes less than a second. I'd rather not rely on an external SQL server, though, because the code should be able to run even on machines with no access to a SQL server. I also tried to create a numba function which loops through the rows one by one, but I understand that numba doesn't support strings (or at least I couldn't get that to work). 
Toy example import numpy as np import pandas as pd import sqlite3 import time import numba start = time.time() running_time = pd.Series() n = int(1e6) df1 = pd.DataFrame() df1['income']=np.random.lognormal(0.4,0.4, n) *20e3 df1['loan balance'] = np.maximum(0, np.minimum(30e3, 5e3 * np.random.randn(n) + 20e3 ) ) df1['city'] = np.random.choice(['London','Leeds','Truro','Manchester','Liverpool'] , n ) df1['city'] = df1['city'].astype('|S80') df1['borrower age'] = np.maximum(22, np.minimum(70, 30 * np.random.randn(n) + 30 ) ) df1['# children']=np.random.choice( [0,1,2,3], n, p= [0.4,0.3,0.25,0.05] ) df1['rate'] = np.maximum(0.5e-2, np.minimum(10e-2, 1e-2 * np.random.randn(n) + 4e-2 ) ) running_time['data frame creation'] = time.time() - start conn = sqlite3.connect(":memory:", detect_types = sqlite3.PARSE_DECLTYPES) cur = conn.cursor() df1.to_sql("df1", conn, if_exists ='replace') cur.execute("ALTER TABLE df1 ADD new_field nvarchar(80)") cur.execute('''UPDATE df1 SET new_field = case when city = 'London' AND income > 10000 then 'group 1' when city = 'Manchester' or city = 'Leeds' then 'group 2' when 'borrower age' > 50 then 'group 3' else 'group 4' end ''') df_from_sql = pd.read_sql('select * from df1', conn) running_time['sql lite'] = time.time() - start def my_func(city, income, borrower_age): if city == 'London' and income > 10000: return 'group 1' elif city == 'Manchester' or city == 'Leeds': return 'group 2' elif borrower_age > 50: return 'group 3' else: return 'group 4' df1['new_field'] = df1.apply( lambda x: my_func( x['city'], x['income'], x['borrower age'] ) , axis =1) running_time['data frame apply'] = time.time() - start @numba.jit(nopython = True) def my_func_numba_apply(city, income, borrower_age): if city == 'London' and income > 10000: return 'group 1' elif city == 'Manchester' or city == 'Leeds': return 'group 2' elif borrower_age > 50: return 'group 3' else: return 'group 4' df1['new_field numba_apply'] = df1.apply( lambda x: my_func_numba_apply( x['city'], x['income'], x['borrower age'] ) , axis =1) running_time['data frame numba'] = time.time() - start x = np.concatenate(([0], running_time)) execution_time = pd.Series(np.diff(x) , running_time.index) print(execution_time) Other questions I have found I have found a number of other questions, but none which directly addresses my point. Most other questions were either easily to vectorise (e.g. just two choices, so np.where works well) or they recommended a numba-based solution, which in my case actually happens to be slower. E.g. Speed up applying function to a list of pandas dataframes This one with dates, so not really applicable How to speed up apply method with lambda in pandas with datetime Joins, so not really applicable speed up pandas apply or using map
Try with numpy.select: conditions = [df["city"].eq("London") & df["income"].gt(10000), df["city"].isin(["Manchester", "Leeds"]), df["borrower_age"].gt(50)] choices = ["Group 1", "Group 2", "Group 3"] df["New Field"] = np.select(conditions, choices, "Group 4") Or have the conditions as a dictionary and use that in the np.select: conditions = {"Group 1": df1["city"].eq("London") & df1["income"].gt(10000), "Group 2": df1["city"].isin(["Manchester", "Leeds"]), "Group 3": df1["borrower age"].gt(50)} df["New Field"] = np.select(conditions.values(), conditions.keys(), "Group 4")
4
4
72,062,542
2022-4-29
https://stackoverflow.com/questions/72062542/is-there-a-way-to-filter-out-items-from-relatedmanager-in-a-modelviewset
I'm using DRF for a simple API, and I was wondering if there's a way to achieve this behavior: I've got two models similar to the following: class Table(models.Model): name = models.CharField(max_length=100) ... class Column(models.Model): original_name = models.CharField(max_length=100) name = models.CharField(max_length=100, blank=True, null=True) ... table = models.ForeignKey(Table, on_delete=models.CASCADE, related_name="columns") And their serializers as follows: class ColumnSerializer(serializers.HyperlinkedModelSerializer): table = serializers.HyperlinkedRelatedField( read_only=True, view_name="table-detail" ) class Meta: model = Column fields = ["url", "name", "table"] class TableSerializer(serializers.HyperlinkedModelSerializer): dataset = serializers.HyperlinkedRelatedField( read_only=True, view_name="dataset-detail" ) tags = serializers.SlugRelatedField( many=True, slug_field="name", queryset=Tag.objects.all() ) columns = ColumnSerializer(many=True, read_only=True) class Meta: model = Table fields = [ "url", "name", ... "columns", ] This returns me an output similar to { ... "results": [ { "url": "http://0.0.0.0:8001/api/tables/1/", "name": "some-name", "columns": [ { "url": "http://0.0.0.0:8001/api/columns/1/", "name": "id", "table": "http://0.0.0.0:8001/api/tables/1/" }, ... } which is totally fine. But what I'd really want to do is, if a Column has name=None, it's filtered out from every API ViewSet. I've managed to do it on the ColumnViewSet by doing queryset = queryset.filter(name__isnull=False), but I can't do it for the TableViewSet or others that might show a Column list. I've tried tinkering with the ColumnSerializer, but the best I could get from it was to show nulls on the Column list. I wonder if there's a way of hiding those. EDIT 1: Adding my ViewSets class TableViewSet(viewsets.ModelViewSet): serializer_class = TableSerializer def get_queryset(self): queryset = Table.objects.all().order_by("name") # some query_params filtering return queryset class ColumnViewSet(viewsets.ModelViewSet): serializer_class = ColumnSerializer def get_queryset(self): queryset = Column.objects.all().order_by("id") queryset = queryset.filter(name__isnull=False) # some query_params filtering return queryset
You can work with a Prefetch object [Django-doc] to filter the related object collection, so: from django.db.models import Prefetch class TableViewSet(viewsets.ModelViewSet): serializer_class = TableSerializer def get_queryset(self): queryset = Table.objects.prefetch_related( Prefetch('columns', Column.objects.filter(name__isnull=False)) ).order_by('name') # some query_params filtering return queryset
4
5
72,059,380
2022-4-29
https://stackoverflow.com/questions/72059380/python-fuctional-style-iterative-algoritm
In Haskell there is a simple list function available iterate :: (a -> a) -> a -> [a] iterate f x = x : iterate f (f x) In Python it could be implemented as follows: def iterate(f, init): while True: yield init init = f(init) I was kinda surprised that something basic like this is not part of the functools/itertools modules. Could it be simply constructed in functional style (i.e. without the loop) using the tools provided in these libraries? (Mostly code golf, trying to learn about functional style in Python.)
You can do it using some of the functions in itertools: from itertools import accumulate, repeat def iterate(func, initial): return accumulate(repeat(None), func=lambda tot, _: func(tot), initial=initial) Although it's clearly not very clean. Itertools is missing some fundamental functions for constructing streams, like unfoldr. Most of the itertools functions could be defined in terms of unfoldr, as it happens, but functional programming is a little uncomfortable in Python anyways so that might not be much of a benefit.
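A quick usage sketch of that accumulate-based iterate, paired with itertools.islice to take a finite prefix of the infinite stream (note that accumulate's initial keyword needs Python 3.8+):

from itertools import islice

powers_of_two = iterate(lambda x: 2 * x, 1)
print(list(islice(powers_of_two, 5)))  # [1, 2, 4, 8, 16]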
11
9
72,051,076
2022-4-28
https://stackoverflow.com/questions/72051076/rotating-qr-code-to-the-correct-position-using-python-opencv
I'm a beginner in python and currently studying QR code detection and decoding. I'm having a hard time rotating the detected QR code to the right position. I already used minAreaRect() to rotate my QR code but it doesn't work. Is there any workaround or a right way to do this? thanks! ROI2 = cv2.imread('ROI.png') gray2 = cv2.cvtColor(ROI2, cv2.COLOR_BGR2GRAY) blur2 = cv2.GaussianBlur(gray2, (9, 9), 0) thresh2 = cv2.threshold(blur2, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)[1] # Morph close # kernel2 = cv2.getStructuringElement(cv2.MORPH_RECT, (5, 5)) # close2 = cv2.morphologyEx(thresh2, cv2.MORPH_CLOSE, kernel2, iterations=10) # Find contours and filter for QR code cnts2 = cv2.findContours(thresh2, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE) cnts2 = cnts2[0] if len(cnts2) == 2 else cnts2[1] c = sorted(cnts2, key=cv2.contourArea, reverse=True)[0] draw = cv2.cvtColor(thresh2, cv2.COLOR_GRAY2BGR) cv2.drawContours(draw, [c], 0, (0, 255, 0), 2) rotrect = cv2.minAreaRect(c) box = cv2.boxPoints(rotrect) box = numpy.int0(box) cv2.drawContours(draw, [box], 0, (0, 0, 255), 2) cv2.imshow('thresh', thresh2) cv2.imshow('ROI', ROI2) cv2.imshow('minarearect', draw)
From my understanding, you're trying to deskew an image. To do this, we need to first compute the rotated bounding box angle then perform a linear transformation. The idea is to use cv2.minAreaRect + cv2.warpAffine. According to the documentation, cv2.minAreaRect returns (center(x, y), (width, height), angle of rotation) = cv2.minAreaRect(...) The third parameter gives us the angle we need to deskew the image. Input image -> Output result Skew angle: -39.99416732788086 Code import cv2 import numpy as np # Load image, grayscale, Otsu's threshold image = cv2.imread('2.png') gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY) gray = 255 - gray thresh = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)[1] # Compute rotated bounding box coords = np.column_stack(np.where(thresh > 0)) angle = cv2.minAreaRect(coords)[-1] if angle < -45: angle = -(90 + angle) else: angle = -angle print("Skew angle: ", angle) # Rotate image to deskew (h, w) = image.shape[:2] center = (w // 2, h // 2) M = cv2.getRotationMatrix2D(center, angle, 1.0) rotated = cv2.warpAffine(image, M, (w, h), flags=cv2.INTER_CUBIC, borderMode=cv2.BORDER_REPLICATE) cv2.imshow('rotated', rotated) cv2.waitKey() Note: See Python OpenCV skew correction for another approach using the Projection Profile Method to correct skew.
5
6
72,050,038
2022-4-28
https://stackoverflow.com/questions/72050038/use-numpy-to-apply-a-fixed-palette-to-an-image
I have a NumPy image in RGB bytes, let's say it's this 2x3 image: img = np.array([[[ 0, 255, 0], [255, 255, 255]], [[255, 0, 255], [ 0, 255, 255]], [[255, 0, 255], [ 0, 0, 0]]]) I also have a palette that covers every color used in the image. Let's say it's this palette: palette = np.array([[255, 0, 255], [ 0, 255, 0], [ 0, 255, 255], [ 0, 0, 0], [255, 255, 255]]) Is there some combination of indexing the image against the palette (or vice versa) that will give me a paletted image equivalent to this? img_p = np.array([[1, 4], [0, 2], [0, 3]]) For comparison, I know the reverse is pretty simple. palette[img_p] will give a result equivalent to img. I'm trying to figure out if there's a similar approach in the opposite direction that will let NumPy do all the heavy lifting. I know I can just iterate over all the image pixels individually and build my own paletted image. I'm hoping there's a more elegant option. Okay, so I implemented the various solutions below and ran them over a moderate test set: 20 images, each one 2000x2000 pixels, with a 32-element palette of three-byte colors. Pixels were given random palette indexes. All algorithms were run over the same images. Timing results: mostly empty lookup array - 0.89 seconds np.searchsorted approach - 3.20 seconds Pandas lookup, single integer - 38.7 seconds Using == and then aggregating the boolean results - 66.4 seconds inverting the palette into a dict and using np.apply_along_axis() - Probably ~500 seconds, based on a smaller test set Pandas lookup with a MultiIndex - Probably ~3000 seconds, based on a smaller test set Given that the lookup array has a significant memory penalty (and a prohibitive one if there's an alpha channel), I'm going to go with the np.searchsorted approach. The lookup array is significantly faster if you want to spend the RAM on it.
Edit Here is a faster way that uses np.searchsorted. def rev_lookup_by_sort(img, palette): M = (1 + palette.max())**np.arange(3) p1d, ix = np.unique(palette @ M, return_index=True) return ix[np.searchsorted(p1d, img @ M)] Correctness (by equivalence to rev_lookup_by_dict() in the original answer below): np.array_equal( rev_lookup_by_sort(img, palette), rev_lookup_by_dict(img, palette), ) Speedup (for a 1000 x 1000 image and a 1000 colors palette): orig = %timeit -o rev_lookup_by_dict(img, palette) # 2.47 s ± 10.3 ms per loop (mean ± std. dev. of 7 runs, 1 loop each) v2 = %timeit -o rev_lookup_by_sort(img, palette) # 71.8 ms ± 93.7 µs per loop (mean ± std. dev. of 7 runs, 10 loops each) >>> orig.average / v2.average 34.46 So that answer using np.searchsorted is 30x faster at that size. Original answer An initial shot gives a slowish version (hopefully we can do better). It uses a dict, where keys are colors as tuples. def rev_lookup_by_dict(img, palette): d = {tuple(v): k for k, v in enumerate(palette)} def func(pix): return d.get(tuple(pix), -1) return np.apply_along_axis(func, -1, img) img_p = rev_lookup_by_dict(img, palette) Notice that "color not found" is expressed as -1 in img_p. On your (modified) data: >>> img_p array([[1, 4], [0, 2], [0, 3]]) Larger example: # setup from math import isqrt w, h = 1000, 1000 s = isqrt(w * h) palette = np.random.randint(0, 256, (s, 3)) img = palette[np.random.randint(0, s, (w, h))] Test: img_p = rev_lookup_by_dict(img, palette) >>> np.array_equal(palette[img_p], img) True Timing: %timeit rev_lookup_by_dict(img, palette) # 2.48 s ± 16.9 ms per loop (mean ± std. dev. of 7 runs, 1 loop each) That's quite awful, but hopefully we can do better.
4
3
72,050,177
2022-4-28
https://stackoverflow.com/questions/72050177/futurewarning-dropping-of-nuisance-columns-in-dataframe-reductions-warning-wh
I have a dataframe that looks something like this: col1 col2 col3 0 1 True abc 1 2 False def 2 3 True ghi When I run df.mean(), it shows a warning: >>> df.mean() <stdin>:1: FutureWarning: Dropping of nuisance columns in DataFrame reductions (with 'numeric_only=None') is deprecated; in a future version this will raise TypeError. Select only valid columns before calling the reduction. col1 2.000000 col2 0.666667 dtype: float64 How do I solve this warning?
Numeric functions such as mean, median, sem, skew, etc., only support dealing with numeric values. If you look at the data types of your columns... >>> df.dtypes col1 int64 col2 bool col3 object dtype: object ...you can see that the dtype of col1 is int64, which mean can handle, because it's numeric. Likewise, the dtype of col2 is bool, which Python, pandas, and numpy essentially treat as ints, so mean treats col2 as if it only contains 1 (for True) and 0 for False. The dtype of col3, however, is object, the default dtype for strings, which is basically a generic type to encapsulate any type of data that pandas can't understand. Since it's not numeric, mean has no idea how to deal with it. (After all, how would you compute the mean of abc and def?) There are a few ways to solve this problem, but "ignoring it" isn't one of them, because, as the warning indicates, in a future version of pandas, this warning will become an error that will stop your code from running. Use numeric_only=True. This will cause mean to skip columns that aren't numeric — col3 in this case: >>> df.mean(numeric_only=True) col1 2.000000 col2 0.666667 dtype: float64 (Notice how col3 is omitted). Select only the columns you need to operate on: >>> df[['col1', 'col2']].mean() col1 2.000000 col2 0.666667 dtype: float64
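A third option is to make the column selection explicit with select_dtypes before the reduction:

>>> df.select_dtypes(include='number').mean()
col1    2.0
dtype: float64

Note that include='number' leaves out the bool column col2 as well, so the result differs slightly from numeric_only=True; include=['number', 'bool'] would keep it.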
6
7
72,044,305
2022-4-28
https://stackoverflow.com/questions/72044305/regex-on-bytes-in-python
I would like to extract 10.00ML in following byte: b'\x0200S10.00ML\x03' So I've tried extracting the 10.00ML between 200S and \x03: result = re.search(b'200S(.*)x03', b'\x0200S10.00ML\x03') which didn't work, no element was found: AttributeError: 'NoneType' object has no attribute 'group' Using only strings I have a minimum working example: test_string = 'a3223b' result = re.search('a(.*)b', test_string) print(result.group(1))
You can use import re text = b'\x0200S10.00ML\x03' m = re.search(rb'\x0200S(.*?)\x03', text, re.S) if m: print( m.group(1).decode('utf-8') ) # => 10.00ML Note that \x02 and \x03 are START OF HEADING and START OF TEXT control chars, so you cannot match them as literal text.
4
5
72,039,810
2022-4-28
https://stackoverflow.com/questions/72039810/python-3-10-optional-parameter-type-union-vs-none-default
It's not a huge issue, but as a matter of style, I was wondering about the best way to indicate optional function parameters... Before type hints, it was like this for parameter b: def my_func(a, b = None) Before Python 3.10, using type hints: def my_func(a, b: Optional[str]) With Python 3.10's lovely pipe-type notation (see PEP 604): def my_func(a, b: str | None) The latter seems the obvious choice of the three options, but I was wondering if this completely eliminates the need to specify the default None value, which would be: def my_func(a, b: str | None = None) EDIT: Thanks to @deceze and @jonrsharpe for pointing out that def my_func(a, b: str | None) would still require you to pass a value to b: you would explicitly have to pass None if you wanted that. So, the most concise one that will work to ensure that b is optional (i.e. the caller does not have to pass a value at all) is: def my_func(a, b: str = None) Stylistically, incorporating explicit typing, is def my_func(a, b: str | None = None), i.e. explicit optional typing plus default None value, ever an option?
Per PEP-484: A past version of this PEP allowed type checkers to assume an optional type when the default value is None, as in this code: def handle_employee(e: Employee = None): ... This is no longer the recommended behavior. Type checkers should move towards requiring the optional type to be made explicit. So the official answer is no. You should specify ... | None = None for optional arguments.
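In practice that means writing both the explicit optional type and the default. A small sketch of the recommended spellings (recent mypy releases reject implicit Optional by default, though the exact version where that changed is an assumption here):

from typing import Optional

def my_func(a, b: str | None = None) -> None:            # Python 3.10+
    ...

def my_func_legacy(a, b: Optional[str] = None) -> None:  # earlier Python versions
    ...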
7
12
72,038,297
2022-4-28
https://stackoverflow.com/questions/72038297/find-positive-and-negative-bin-limits-based-on-multiple-other-columns
I have a dataframe like as shown below ID raw_val var_name constant s_value 1 388 Qty 0.36 -0.032 2 120 Qty 0.36 -0.007 3 34 Qty 0.36 0.16 4 45 Qty 0.36 0.31 1 110 F1 0.36 -0.232 2 1000 F1 0.36 -0.17 3 318 F1 0.36 0.26 4 419 F1 0.36 0.31 My objective is to a) Find the upper and lower limits (of raw_val) for each value of var_name for s_value >=0 b) Find the upper and lower limits (of raw_val) for each value of var_name for s_value <0 I tried the below df['sign'] = np.where[df['s_value']<0, 'neg', 'pos'] s = df.groupby(['var_name','sign'])['raw_val'].series df['buckets'] = pd.IntervalIndex.from_arrays(s) Please note that my real data is big data and has more than 200 unique values for var_name column. The distribution of positive and negative values (s_value) may be uneven for each value of the var_name columns. In sample df, I have shown even distribution of pos and neg values but it may not be the case in real life. I expect my output to be like as below var_name sign low_limit upp_limit Qty neg 120 388 F1 neg 110 1000 Qty pos 34 45 Qty pos 318 419
I think numpy.where with aggregated minimal and maximal values is the way to go: df['sign'] = np.where(df['s_value']<0, 'neg', 'pos') df1 = (df.groupby(['var_name','sign'], sort=False, as_index=False) .agg(low_limit=('raw_val','min'), upp_limit=('raw_val','max'))) print (df1) var_name sign low_limit upp_limit 0 Qty neg 120 388 1 Qty pos 34 45 2 F1 neg 110 1000 3 F1 pos 318 419
4
4
72,034,176
2022-4-27
https://stackoverflow.com/questions/72034176/adjust-the-size-of-the-text-label-in-plotly
I'm trying to adjust the text size according to country size, so the text will be inside the borders of the country. Here's my code: # imports import pandas as pd import plotly.express as px # uploading file df=pd.read_csv('regional-be-daily-latest.csv', header = 1) # creating figure fig = px.choropleth(df, locations='Code', color='Track Name') fig.update_layout(margin={"r":0,"t":0,"l":0,"b":0}) fig.add_scattergeo( locations = df['Code'], text = df['Track Name'], mode = 'text', ) fig.show() The output visualization that my code gives me is: The text label for the orange country is inside the borders of the country but the text to label the blue country is bigger than the country itself. What I'm looking for would be to adjust the size so it will not exceed the borders of the country. How can I do this?
You can set the font size using the update_layout function and specifying the font's size by passing the dictionary in the font parameter. import pandas as pd import plotly.express as px df=pd.read_csv('regional-be-daily-latest.csv', header = 1) fig = px.choropleth(df, locations='Code', color='Track Name') fig.update_layout(margin={"r":0,"t":0,"l":0,"b":0}) fig.add_scattergeo( locations = df['Code'], text = df['Track Name'], mode = 'text', ) fig.update_layout( font=dict( family="Courier New, monospace", size=18, # Set the font size here color="RebeccaPurple" ) ) fig.show()
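If you only want to shrink the country labels rather than every piece of text in the figure, you can target the scattergeo trace instead; textfont is a standard property of scatter-style traces, but the size value here is just a guess to tune per figure:

fig.update_traces(textfont_size=10, selector=dict(type='scattergeo'))

Plotly will not size the text automatically to fit inside each country's borders, so some manual tuning is usually unavoidable.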
16
21
72,033,491
2022-4-27
https://stackoverflow.com/questions/72033491/how-to-make-python-module-yt-dlp-ignore-private-videos-when-downloading-a-playli
I'm Downloading a Playlist which has some hidden Videos so python gives me DownloadError, I want to Download the Whole Playlist at once. Is there a fix for that. I'm trying to see if I can make it ignore those hidden videos My Code: from yt_dlp import YoutubeDL url = 'https://www.youtube.com/playlist?list=PLzMXToX8KzqhKrURIhVTJMb0v-HeDM3gs' ydl_opts = {'format': 'mp4'} with YoutubeDL(ydl_opts) as ydl: ydl.download(url) Error Given in Terminal: Enter your URL: https://youtube.com/playlist?list=PLzMXToX8KzqhKrURIhVTJMb0v-HeDM3gs [youtube:tab] PLzMXToX8KzqhKrURIhVTJMb0v-HeDM3gs: Downloading webpage WARNING: [youtube:tab] YouTube said: INFO - 8 unavailable videos are hidden [youtube:tab] PLzMXToX8KzqhKrURIhVTJMb0v-HeDM3gs: Downloading API JSON with unavailable videos WARNING: [youtube:tab] YouTube said: INFO - Unavailable videos will be hidden during playback [download] Downloading playlist: English Grammar [youtube:tab] playlist English Grammar: Downloading 52 videos [download] Downloading video 1 of 52 [youtube] JGXK_99nc5s: Downloading webpage [youtube] JGXK_99nc5s: Downloading android player API JSON ERROR: [youtube] JGXK_99nc5s: Private video. Sign in if you've been granted access to this video
Based on my understanding of the documentation, I think this will do what you want - unfortunately I cannot test it at the moment, so let me know if it doesn't work: import yt_dlp ydl_opts = { 'ignoreerrors': True } url = 'https://www.youtube.com/playlist?list=PLzMXToX8KzqhKrURIhVTJMb0v-HeDM3gs' with yt_dlp.YoutubeDL(ydl_opts) as ydl: error_code = ydl.download(url)
5
3
72,031,814
2022-4-27
https://stackoverflow.com/questions/72031814/set-a-class-attribute-in-pytest-fixture
I'm making a test class for pytest, I want to set a class attribute a that will be used for several test methods. To do so, I used a fixture set_a, which is launched automatically autouse=True, and invoked only once for the class (scope='class'), because setting a is costly. Here is my code: import pytest import time class Test: @pytest.fixture(scope='class', autouse=True) def set_a(self): print('Setting a...') time.sleep(5) self.a = 1 def test_1(self): print('TEST 1') assert self.a == 1 But the test fails with the following error: ========================================================================= FAILURES ========================================================================== ________________________________________________________________________ Test.test_1 ________________________________________________________________________ self = <tests.test_file.Test object at 0x116d953a0> def test_1(self): print('TEST 1') > assert self.a == 1 E AttributeError: 'Test' object has no attribute 'a' tests/test_file.py:15: AttributeError ------------------------------------------------------------------- Captured stdout setup ------------------------------------------------------------------- Setting a... ------------------------------------------------------------------- Captured stdout call -------------------------------------------------------------------- TEST 1 It looks like a wasn't set even if set_a was invoked, like if a new instance of the class was created when the test is executed. It works well if I change the fixture scope to function, but I don't wan't to set a for each test. Any idea what's the problem here ?
You shouldn’t set the scope since you are already in the class. class Test: @pytest.fixture(autouse=True) def set_a(self): print("Setting a...") time.sleep(5) self.a = 1 def test_1(self): print("TEST 1") assert self.a == 1 This is how you should use the scope=class, meaning it will work for any class in your module: @pytest.fixture(scope="class", autouse=True) def a(request): print("Setting a...") time.sleep(5) request.cls.a = 1 class Test: def test_1(self): print("TEST 1") assert self.a == 1
6
1
72,025,924
2022-4-27
https://stackoverflow.com/questions/72025924/key-shortcut-for-running-python-file-in-vs-code
In VS Code, I'm writing python code. I was wondering if there is a key shortcut to run the file instead of pressing the run button in the right top corner of the screen constantly.
You can press Ctrl + F5 to run the file. If you want to debug the file, use F5 instead.
5
3
72,025,278
2022-4-27
https://stackoverflow.com/questions/72025278/how-to-slice-a-nested-list-twice
With a nested list like: ex_list = [[1, 2, 3], [4, 5, 6], [7, 8, 9]] I need to be able to slice this list for: [[1, 2], [4, 5]] I've been trying: list(ex_list[:2][:2]) but this isn't working. I'm obviously doing something very wrong but haven't been able to find a solution as using commas doesn't work either for some reason.
You should try using comprehension: Try: [i[:2] for i in ex_list[:2]] Code: ex_list = [[1, 2, 3], [4, 5, 6], [7, 8, 9]] print([i[:2] for i in ex_list[:2]]) Output: [[1, 2], [4, 5]]
4
4
72,022,176
2022-4-27
https://stackoverflow.com/questions/72022176/warning-cant-open-read-file-check-file-path-integrity
images_per_class = 80 fixed_size = tuple((500, 500)) train_path = "dataset/train" train_labels = os.listdir(train_path) for training_name in train_labels: dir = os.path.join(train_path, training_name) current_label = training_name for x in range(1,images_per_class+1): # get the image file name file = dir + "/" + str(x) + ".jpg" # read the image and resize it to a fixed-size image = cv2.imread(file) image = cv2.resize(image, fixed_size) When I run this code, it shows this error: error: OpenCV(4.5.5) D:\a\opencv-python\opencv-python\opencv\modules\imgproc\src\resize.cpp:4052: error: (-215:Assertion failed) !ssize.empty() in function 'cv::resize' and this warning: [ WARN:[email protected]] global D:\a\opencv-python\opencv-python\opencv\modules\imgcodecs\src\loadsave.cpp (239) cv::findDecoder imread_('dataset/train\Apple/1.jpg'): can't open/read file: check file path/integrity I don't have a problem with the OpenCV installation because I have used it before and it works with other code. Any help please?
file = dir + "/" + str(x) + ".jpg" Try replacing this line with: file = dir + "\\" + str(x) + ".jpg" The / is not correct here; the correct separator is \ (which has to be written as "\\" inside a Python string).
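Whichever separator you use, the warning means the file could not be opened; cv2.imread then returns None instead of raising, and that None is what later makes cv2.resize fail with the !ssize.empty() assertion. A small defensive sketch using os.path.join, which picks the right separator for the OS:

import os
import cv2

file = os.path.join(dir, f"{x}.jpg")
image = cv2.imread(file)
if image is None:
    raise FileNotFoundError(f"Could not read {file} - check that the file exists")
image = cv2.resize(image, fixed_size)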
10
0
71,968,447
2022-4-22
https://stackoverflow.com/questions/71968447/python-typing-copy-kwargs-from-one-function-to-another
It is common pattern in Python extend or wrap functions and use **kwargs to pass all keyword arguments to the extended function. i.e. take class A: def bar(self, *, a: int, b: str, c: float) -> str: return f"{a}_{b}_{c}" class B(A): def bar(self, **kwargs): return f"NEW_{super().bar(**kwargs)}" def base_function(*, a: int, b: str, c: float) -> str: return f"{a}_{b}_{c}" def extension(**kwargs) -> str: return f"NEW_{base_function(**kwargs)}" Now calling extension(not_existing="a") or B().bar(not_existing="a") would lead to a TypeError, that could be detected by static type checkers. How can I annotate my extension or B.bar in order to detect this problem before I run my code? This annotation would be also helpful for IDE's to give me the correct suggestions for extension or B.bar.
Solution Update: There is currently a CPython PR open to include the following solution into the standard library. PEP 612 introduced the ParamSpec (see Documentation) Type. We can exploit this to generate a decorator that tells our type checker, that the decorated functions has the same arguments as the given function: from typing import ( Callable, ParamSpec, TypeVar, cast, Any, Type, Literal, ) # Define some specification, see documentation P = ParamSpec("P") T = TypeVar("T") # For a help about decorator with parameters see # https://stackoverflow.com/questions/5929107/decorators-with-parameters def copy_kwargs( kwargs_call: Callable[P, Any] ) -> Callable[[Callable[..., T]], Callable[P, T]]: """Decorator does nothing but returning the casted original function""" def return_func(func: Callable[..., T]) -> Callable[P, T]: return cast(Callable[P, T], func) return return_func This will define a decorator than can be used to copy the complete ParameterSpec definition to our new function, keeping it's return value. Let's test it (see also MyPy Playground) # Our test function for kwargs def source_func(foo: str, bar: int, default: bool = True) -> str: if not default: return "Not Default!" return f"{foo}_{bar}" @copy_kwargs(source_func) def kwargs_test(**kwargs) -> float: print(source_func(**kwargs)) return 1.2 # define some expected return values okay: float broken_kwargs: float broken_return: str okay = kwargs_test(foo="a", bar=1) broken_kwargs = kwargs_test(foo=1, bar="2") broken_return = kwargs_test(foo="a", bar=1) This works as expected with pyre 1.1.310, mypy 1.2.0 and PyCharm 2023.1.1. All three will complain about about the broken kwargs and broken return value. Only PyCharm has troubles to detect the default argument, as PEP 612 support is not yet fully implemented. ⚠️ Limitations Still we need to by very careful how to apply this function. Assume the following call runtime_error = kwargs_test("a", 1) Will lead the runtime error “kwargs_test1() takes 0 positional arguments but 2 were given” without any type checker complaining. So if you copy **kwargs like this, ensure that you put all positional arguments into your function. The function in which the parameters are defined should use keyword only arguments. So a best practise source_func would look like this: def source_func(*, foo: str, bar: int, default: bool = True) -> str: if not default: return "Not Default!" return f"{foo}_{bar}" But as this is probably often used on library functions, we not always have control about the source_func, so keep this problem in mind! You also could add *args to your target function to prevent this problem: # Our test function for args and kwargs def source_func_a( a: Literal["a"], b: Literal["b"], c: Literal["c"], d: Literal["d"], default: bool =True ) -> str: if not default: return "Not Default!" 
return f"{a}_{b}_{c};{d}" @copy_kwargs(source_func_a) def args_test(a: Literal["a"], *args, c: Literal["c"], **kwargs) -> float: kwargs["c"] = c # Note the correct types of source_func are not checked for kwargs and args, # if args_test doesn't define them (at least for mypy) print(source_func(a, *args, **kwargs)) return 1.2 # define some expected return values okay_args: float okay_kwargs: float broken_kwargs: float broken_args: float okay_args = args_test("a", "b", "c", "d") okay_kwargs = args_test(a="a", b="b", c="c", d="d") borken_args = args_test("not", "not", "not", "not") broken_kwargs = args_test(a="not", b="not", c="not", d="not") History of the PEP 612 Introduction for MyPy and PyCharm MyPy and PyCharm had issues using ParamSpec when creating this answer. The issues seems to be resolved but the links are kept as historical reference: MyPy merged a first implementation for ParamSpec on 7th April 2022 According to the related typedshed Issue, PyCharm should support ParamSpec but did not correctly detect the copied **kwargs but complained that okay = kwargs_test(foo="a", bar=1) would have invalid arguments. (Fixed now) Mypy: Allow using TypedDict for more precise typing of **kwds Mypy: PEP 612 tracking issue Pyright Prototype support for typed **kwargs Using Concatenate If you want to copy the kwargs but also want to allow additional parameters you need to adopt kwargs with Concanate: from typing import Concatenate def copy_kwargs_with_int( kwargs_call: Callable[P, Any] ) -> Callable[[Callable[..., T]], Callable[Concatenate[int, P], T]]: """Decorator does nothing but returning the casted original function""" def return_func(func: Callable[..., T]) -> Callable[Concatenate[int, P], T]: return cast(Callable[Concatenate[int, P], T], func) return return_func @copy_kwargs_with_int(source_func) def something(first: int, *args, **kwargs) -> str: print(f"Yeah {first}") return str(source_func(*args, **kwargs)) something("a", "string", 3) # error: Argument 1 to "something" has incompatible type "str"; expected "int" [arg-type] okay_call: str okay_call = something(3, "string", 3) # okay See MyPy Play for details. Note: Currently you need to define the a decorator for each variable you want to add and due to the nature of Concanate they can also just be added as args in front.
11
19
71,961,686
2022-4-21
https://stackoverflow.com/questions/71961686/avoiding-circular-imports-with-type-annotations-in-situations-where-future-a
When I have the following minimum reproducing code: start.py from __future__ import annotations import a a.py from __future__ import annotations from typing import Text import b Foo = Text b.py from __future__ import annotations import a FooType = a.Foo I get the following error: soot@soot:~/code/soot/experimental/amol/typeddict-circular-import$ python3 start.py Traceback (most recent call last): File "start.py", line 3, in <module> import a File "/home/soot/code/soot/experimental/amol/typeddict-circular-import/a.py", line 5, in <module> import b File "/home/soot/code/soot/experimental/amol/typeddict-circular-import/b.py", line 5, in <module> FooType = a.Foo AttributeError: partially initialized module 'a' has no attribute 'Foo' (most likely due to a circular import) I included __future__.annotations because most qa of this sort is resolved by simply including the future import at the top of the file. However, the annotations import does not improve the situation here because simply converting the types to text (as the annotations import does) doesn't actually resolve the import order dependency. More broadly, this seems like an issue whenever you want to create composite types from multiple (potentially circular) sources, e.g. CompositeType = Union[a.Foo, b.Bar, c.Baz] What are the available options to resolve this issue? Is there any other way to 'lift' the type annotations so they are all evaluated after everything is imported?
In most cases using typing.TYPE_CHECKING should be enough to resolve circular import issues related to use in annotations. Note annotations future-import (details), alternatively you can enclose all names not available at runtime (imported under if TYPE_CHECKING) in quotes. # a.py from __future__ import annotations from typing import TYPE_CHECKING if TYPE_CHECKING: from b import B class A: pass def foo(b: B) -> None: pass # b.py from __future__ import annotations from typing import TYPE_CHECKING if TYPE_CHECKING: from a import A class B: pass def bar(a: A) -> None: pass # __main__.py from a import A from b import B However, for exactly your MRE it won't work. If the circular dependency is introduced not only by type annotations (e.g. your type aliases), the resolving may become really tricky. If you don't need Foo available at runtime in your example, it can be declared in if TYPE_CHECKING: block too, mypy will interpret that properly. If it is for runtime too, then everything depends on exact code structure (in your MRE dropping import b is enough). Union type can be declared in separate file that imports a, b and c and creates Union. If you need this union in a, b or c, then things are a bit more complicated, probably some functionality needs to be extracted into separate file d that creates union and uses it (also the code will be a bit cleaner this way, because every file will contain only common functionality).
5
10
72,005,302
2022-4-25
https://stackoverflow.com/questions/72005302/completely-uninstall-python-3-on-mac
I installed Python 3 on Mac and installed some packages as well. But then I see AWS lamda does not support Python 3 so I decided to downgrade. I removed Python3 folder in Applications and cleared the trash. But still I see a folder named 3 in /Library/Frameworks/Python.framework/Versions which is causing problems, such as this: $ python3 -m pip install virtualenv Requirement already satisfied: virtualenv in /Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages (20.14.1) Requirement already satisfied: platformdirs<3,>=2 in /Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages (from virtualenv) (2.5.2) So my question is how do I completely uninstall python 3 from my Mac?
Removing the app does not completely uninstall that version of Python. You will need to remove the framework directories and their symbolic links. Deleting the frameworks sudo rm -rf /Library/Frameworks/Python.framework/Versions/[version number] replacing [version number] with 3.10 in your case. Removing symbolic links To list the broken symbolic links. ls -l /usr/local/bin | grep '../Library/Frameworks/Python.framework/Versions/[version number]' And to remove these links: cd /usr/local/bin ls -l /usr/local/bin | grep '../Library/Frameworks/Python.framework/Versions/[version number]' | awk '{print $9}' | tr -d @ | xargs rm As always, please be wary of copying these commands. Please make sure the directories in the inputs are actual working directories before you execute anything. The general idea in the end is to remove the folders and symlinks, and you're good to go. Here is another response addressing this process: How to uninstall Python 2.7 on a Mac OS X 10.6.4?
46
63
71,965,662
2022-4-22
https://stackoverflow.com/questions/71965662/python-black-style-discrepancy
I am using black to format my python code. I observed the following behavior. I admit that it is a very specific case but it goes on my nerves. Let's suppose I have the following code: @pytest.mark.parametrize( "first", ["second"], ) black does not change that. But, if I remove the trailing comma after ["second"]: @pytest.mark.parametrize( "first", ["second"] ) black makes a nice reordering: @pytest.mark.parametrize("first", ["second"]) On the other hand, let us see the following: @pytest.mark.parametrize( "This is a very long line. Black will not be able to to a linebreak here.", ["second"], ) In this case, again, black does not change anything. So far, so good. Now, I remove the trailing comma after ["second"] again: @pytest.mark.parametrize( "This is a very long line and black will not be able to do a line break here.", ["second"] ) As the line is too long, black does not do a nice reordering into one line as before. But instead, black adds a trailing comma after the ["second"]: @pytest.mark.parametrize( "This is a very long line and black will not be able to do a line break here.", ["second"], ) I do not like that. When I am confronted with such lists in my code, I always tend to remove the trailing comma to enforce one line reordering. But then, in most of the cases, black adds it again ...
The black behavior is expected from my understanding, because one of the purpose is to limit the number of changes inside a git diff. If the line is too long, you already are in a multiline writing. And if you are in a multiline writing, then it should end with a comma. That way if you add elements in the tuple, list, … or arguments in method or function, it will not generate a diff for the line where you add the comma. If a contrario you are in a mono-line writing that is not too long, black will consider if you have added a trailing comma. If trailing comma, it will go to multi-line, if not it will encourage mono-line. a = [1, 2,] # goes to a = [ 1, 2, ] # and a = [1, 2] # remains: a = [1, 2]
4
8
72,000,572
2022-4-25
https://stackoverflow.com/questions/72000572/how-to-change-python-version-of-azure-function
When I publish my azure cloud functions I get the message: Local python version '3.9.7' is different from the version expected for your deployed Function App. This may result in 'ModuleNotFound' errors in Azure Functions. Please create a Python Function App for version 3.9 or change the virtual environment on your local machine to match 'Python|3.8'. How can I change the version to 3.9?
You can view and set the linuxFxVersion from the Azure CLI. With the az functionapp config set command, you can change the linuxFxVersion setting in the function app. az functionapp config set --name <FUNCTION_APP> \ --resource-group <RESOURCE_GROUP> \ --linux-fx-version "PYTHON|3.9" Please refer Changing Python version for more information.
8
14
71,959,420
2022-4-21
https://stackoverflow.com/questions/71959420/client-init-missing-1-required-keyword-only-argument-intents-or-tak
I was trying to make a quick bot for discord and I used this code: import discord from discord.ui import button, view from discord.ext import commands client = discord.Client() @client.event async def on_ready(): print('Autenticazione riuscita. {0.user} è online!'.format(client)) But this error pops up: Client.__init__() missing 1 required keyword-only argument: 'intents' I tried providing an argument by putting something between the brackets, like this: import discord from discord.ui import button, view from discord.ext import commands client = discord.Client(0) @client.event async def on_ready(): print('Autenticazione riuscita. {0.user} è online!'.format(client)) But instead I get this error: Client.__init__() takes 1 positional argument but 2 were given On another PC the exact same code, with exact same modules and same Python version works just fine. What am I missing?
You could use the default Intents unless you have a particular one to specify. client = discord.Client(intents=discord.Intents.default()) As the first error message says, it is a keyword-only argument, so you cannot write discord.Client(discord.Intents.default()) without intents=. See Intents for more details.
32
53
71,982,525
2022-4-23
https://stackoverflow.com/questions/71982525/how-to-use-match-case-to-check-for-a-variables-type-in-python
I have this code to check whether or not a variable is a number or a Vector2: def __mul__(self, other): match type(other): case int | float: pass case Vector2: pass If I run this, I get SyntaxError: name capture 'int' makes remaining patterns unreachable And when I hover in vscode, it gives me: "int" is not accessed Irrefutable pattern allowed only as the last subpattern in an "or" pattern All subpatterns within an "or" pattern must target the same names Missing names: "float" Irrefutable pattern is allowed only for the last case statement If I remove | float it still won't work, so I can't make them separate cases.
Case with a variable (ex: case _: or case other:) needs to be the last case in the list. It matches any value, where the value was not matched by a previous case, and captures that value in the variable. A type can be used in a case, but implies isinstance(), testing to determine if the value being matched is an instance of that type. Therefore, the value used for the match should be the actual variable other rather than the type type(other), since type(other) is a type whose type would match type(). def __mul__(self, other): match other: case int() | float(): pass case Vector2(): pass
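Putting that together for the __mul__ from the question, a sketch could look like the following. The Vector2 attribute names x and y and the multiplication semantics are assumptions, since the class body is not shown in the question:

def __mul__(self, other):
    match other:
        case int() | float() as scalar:
            # scalar multiplication
            return Vector2(self.x * scalar, self.y * scalar)
        case Vector2(x=x, y=y):
            # keyword patterns match by attribute lookup, no __match_args__ needed
            return self.x * x + self.y * y
        case _:
            return NotImplemented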
15
23
71,998,978
2022-4-25
https://stackoverflow.com/questions/71998978/early-stopping-in-pytorch
I tried to implement an early stopping function to avoid my neural network model overfit. I'm pretty sure that the logic is fine, but for some reason, it doesn't work. I want that when the validation loss is greater than the training loss over some epochs, the early stopping function returns True. But it returns False all the time, even though the validation loss becomes a lot greater than the training loss. Could you see where is the problem, please? early stopping function def early_stopping(train_loss, validation_loss, min_delta, tolerance): counter = 0 if (validation_loss - train_loss) > min_delta: counter +=1 if counter >= tolerance: return True calling the function during the training for i in range(epochs): print(f"Epoch {i+1}") epoch_train_loss, pred = train_one_epoch(model, train_dataloader, loss_func, optimiser, device) train_loss.append(epoch_train_loss) # validation with torch.no_grad(): epoch_validate_loss = validate_one_epoch(model, validate_dataloader, loss_func, device) validation_loss.append(epoch_validate_loss) # early stopping if early_stopping(epoch_train_loss, epoch_validate_loss, min_delta=10, tolerance = 20): print("We are at epoch:", i) break EDIT: The train and validation loss: EDIT2: def train_validate (model, train_dataloader, validate_dataloader, loss_func, optimiser, device, epochs): preds = [] train_loss = [] validation_loss = [] min_delta = 5 for e in range(epochs): print(f"Epoch {e+1}") epoch_train_loss, pred = train_one_epoch(model, train_dataloader, loss_func, optimiser, device) train_loss.append(epoch_train_loss) # validation with torch.no_grad(): epoch_validate_loss = validate_one_epoch(model, validate_dataloader, loss_func, device) validation_loss.append(epoch_validate_loss) # early stopping early_stopping = EarlyStopping(tolerance=2, min_delta=5) early_stopping(epoch_train_loss, epoch_validate_loss) if early_stopping.early_stop: print("We are at epoch:", e) break return train_loss, validation_loss
The problem with your implementation is that whenever you call early_stopping() the counter is re-initialized with 0. Here is working solution using an oo-oriented approch with __call__() and __init__() instead: class EarlyStopping: def __init__(self, tolerance=5, min_delta=0): self.tolerance = tolerance self.min_delta = min_delta self.counter = 0 self.early_stop = False def __call__(self, train_loss, validation_loss): if (validation_loss - train_loss) > self.min_delta: self.counter +=1 if self.counter >= self.tolerance: self.early_stop = True Call it like that: early_stopping = EarlyStopping(tolerance=5, min_delta=10) for i in range(epochs): print(f"Epoch {i+1}") epoch_train_loss, pred = train_one_epoch(model, train_dataloader, loss_func, optimiser, device) train_loss.append(epoch_train_loss) # validation with torch.no_grad(): epoch_validate_loss = validate_one_epoch(model, validate_dataloader, loss_func, device) validation_loss.append(epoch_validate_loss) # early stopping early_stopping(epoch_train_loss, epoch_validate_loss) if early_stopping.early_stop: print("We are at epoch:", i) break Example: early_stopping = EarlyStopping(tolerance=2, min_delta=5) train_loss = [ 642.14990234, 601.29278564, 561.98400879, 530.01501465, 497.1098938, 466.92709351, 438.2364502, 413.76028442, 391.5090332, 370.79074097, ] validate_loss = [ 509.13619995, 497.3125, 506.17315674, 497.68960571, 505.69918823, 459.78610229, 480.25592041, 418.08630371, 446.42675781, 372.09902954, ] for i in range(len(train_loss)): early_stopping(train_loss[i], validate_loss[i]) print(f"loss: {train_loss[i]} : {validate_loss[i]}") if early_stopping.early_stop: print("We are at epoch:", i) break Output: loss: 642.14990234 : 509.13619995 loss: 601.29278564 : 497.3125 loss: 561.98400879 : 506.17315674 loss: 530.01501465 : 497.68960571 loss: 497.1098938 : 505.69918823 loss: 466.92709351 : 459.78610229 loss: 438.2364502 : 480.25592041 We are at epoch: 6
34
21
71,935,608
2022-4-20
https://stackoverflow.com/questions/71935608/python-enum-auto-generating-warning-that-parameter-is-unfilled
I have the below code that defines an enum and uses enum.auto() to give entries generated values starting from 1: from enum import Enum, auto class Colors(Enum): RED = auto() BLUE = auto() YELLOW = auto() def main(): print(Colors.RED.value) print(Colors.BLUE.value) print(Colors.YELLOW.value) if __name__ == '__main__': main() Output: 1 2 3 The code works fine and used to not have any warnings, but after updating PyCharm today, I am now getting the following warning for auto(): Parameter(s) unfilled Possible callees: EnumMeta.__call__(cls: Type[_T], value, names: None = ...) EnumMeta.__call__(cls: EnumMeta, value: str, names: Union[str, Iterable[str], Iterable[Iterable[str]], Mapping[str, Any]], *, module: Optional[str] = ..., qualname: Optional[str] = ..., type: Optional[type] = ..., start: int = ..., boundary: Optional[FlagBoundary] = ...) EnumMeta.__call__(cls: Type[_T], value, names: None = ...) EnumMeta.__call__(cls: EnumMeta, value: str, names: Union[str, Iterable[str], Iterable[Iterable[str]], Mapping[str, Any]], *, module: Optional[str] = ..., qualname: Optional[str] = ..., type: Optional[type] = ..., start: int = ...) I checked the Python documentation but couldn't find anything relevant, as all the examples still use auto() without any parameters. I assume the new warning is because PyCharm is using updated Python linting rules. How do I resolve this warning? UPDATE 1: It seems that PyCharm is detecting enum.auto() as enum.auto(IntFlag), thus the warning that the parameter is unfilled: I will also report this issue to the PyCharm devs. Perhaps it's a bug. UPDATE 2: Nevermind, everyone. I just found out this was a bug and was reported a month ago here. UPDATE 3: The bug has finally been fixed! 🎉
Never mind, everyone. I just found out this was a bug that had already been reported a month ago here. UPDATE: The bug has finally been fixed! 🎉
21
20
71,976,735
2022-4-23
https://stackoverflow.com/questions/71976735/running-djangos-collectstatic-in-dockerfile-produces-empty-directory
I'm trying to run Django from a Docker container on Heroku, but to make that work, I need to run python manage.py collectstatic during my build phase. To achieve that, I wrote the following Dockerfile: # Set up image FROM python:3.10 WORKDIR /usr/src/app ENV PYTHONDONTWRITEBYTECODE=1 ENV PYTHONUNBUFFERED=1 # Install poetry and identify Python dependencies RUN pip install poetry COPY pyproject.toml /usr/src/app/ # Install Python dependencies RUN set -x \ && apt update -y \ && apt install -y \ libpq-dev \ gcc \ && poetry config virtualenvs.create false \ && poetry install --no-ansi # Copy source into image COPY . /usr/src/app/ # Collect static files RUN python -m manage collectstatic -v 3 --no-input And here's the docker-compose.yml file I used to run the image: services: db: image: postgres env_file: - .env.docker.db volumes: - db:/var/lib/postgresql/data networks: - backend ports: - "5433:5432" web: build: . restart: always env_file: - .env.docker.web ports: - "8001:$PORT" volumes: - .:/usr/src/app depends_on: - db networks: - backend command: gunicorn --bind 0.0.0.0:$PORT myapp.wsgi volumes: db: networks: backend: driver: bridge The Dockerfile builds just fine, and I can even see that collectstatic is running and collecting the appropriate files during the build. However, when the build is finished, the only evidence that collectstatic ran is an empty directory called staticfiles. If I run collectstatic again inside of my container, collectstatic works just fine, but since Heroku doesn't persist files created after the build stage, they disappear when my app restarts. I found a few SO answers discussing how to get collectstatic to run inside a Dockerfile, but that's not my problem; my problem is that it does run, but the collected files don't show up in the container. Anyone have a clue what's going on? UPDATE: This answer did the trick. My docker-compose.yml was overriding the changes made by collectstatic with this line: volumes: - .:/usr/src/app If, like me, you want to keep the bind mount for ease of local development (so that you don't need to re-build each time), you can edit the command for the web service as follows: command: bash -c "python -m manage collectstatic && gunicorn --bind 0.0.0.0:$PORT myapp.wsgi" Note that the image would have run just fine as-is had I pushed it to Heroku (since Heroku doesn't use the docker-compose.yml file), so this was just a problem affecting containers I created on my local machine.
You are overriding the content of /usr/src/app in your container by adding volumes: - .:/usr/src/app to your docker compose file. Remove that bind mount, since everything was already copied into the image during the build.
4
5
71,969,496
2022-4-22
https://stackoverflow.com/questions/71969496/can-i-reduce-a-tuple-list-by-key-using-python
I am currently working on showing some visuals about how my NER model has performed. The data I currently have looks like this: counter_list = [ ('Name', {'p':0.56,'r':0.56,'f':0.56}), ('Designation', {'p':0.10,'r':0.20,'f':0.14}), ('Location', {'p':0.56,'r':0.56,'f':0.56}), ('Name', {'p':0.14,'r':0.14,'f':0.14}), ('Designation', {'p':0.10,'r':0.20,'f':0.14}), ('Location', {'p':0.56,'r':0.56,'f':0.56}) ] I would like to eliminate the duplicates and add their respective values to only one of each kind. So the output to look like this: [ ('Name', {'p':0.7,'r':0.7,'f':0.7}), ('Designation', {'p':0.2,'r':0.4,'f':0.28}), ('Location', {'p':1.12,'r':1.12,'f':1.12}) ] I have tried to use the reduce function but it gives me only the output for 'Name' entry only. result = functools.reduce(lambda x, y: (x[0], Counter(x[1])+Counter(y[1])) if x[0]==y[0] else (x[0],x[1]), counter_list) What would be the right approach? I am trying to create some visuals with the final results, to determine which item has the higher 'f','p' or 'r' component.
Why not use pandas and its ~.groupby method? >>> import pandas as pd >>> keys, data = zip(*counter_list) >>> df = pd.DataFrame(data=data, index=keys).groupby(level=0).sum() >>> df p r f Designation 0.20 0.40 0.28 Location 1.12 1.12 1.12 Name 0.70 0.70 0.70 and then do >>> list(df.T.to_dict().items()) [ ('Designation', {'p': 0.2, 'r': 0.4, 'f': 0.28}), ('Location', {'p': 1.12, 'r': 1.12, 'f': 1.12}), ('Name', {'p': 0.7, 'r': 0.7, 'f': 0.7}) ]
5
3
72,018,887
2022-4-26
https://stackoverflow.com/questions/72018887/how-to-build-a-universal-wheel-with-pyproject-toml
This is the project directory structure . ├── meow.py └── pyproject.toml 0 directories, 2 files This is the meow.py: def main(): print("meow world") This is the pyproject.toml: [build-system] requires = ["setuptools"] build-backend = "setuptools.build_meta" [project] name = "meowpkg" version = "0.1" description = "a package that meows" [project.scripts] meow_world = "meow:main" When building this package, no matter whether with python3 -m pip wheel . or using python3 -m build, it creates a file named like meowpkg-0.1-py3-none-any.whl which can not be installed on Python 2. $ python2.7 -m pip install meowpkg-0.1-py3-none-any.whl ERROR: meowpkg-0.1-py3-none-any.whl is not a supported wheel on this platform. But "meowpkg" actually works on Python 2 as well. How to instruct setuptools and/or wheel to create a universal wheel tagged like meowpkg-0.1-py2.py3-none-any.whl, without using the old setup.cfg/setup.py ways? Current workaround: echo "[bdist_wheel]\nuniversal=1" > setup.cfg && python3 -m build && rm setup.cfg
Add this section into the pyproject.toml: [tool.distutils.bdist_wheel] universal = true
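As a quick sanity check (a hedged convenience, not part of the answer), you could list the wheels produced in dist/ after re-running python -m build and confirm the py2.py3 tag:

from pathlib import Path

# After `python -m build`, the wheel filename should carry the universal tag.
for wheel in Path("dist").glob("*.whl"):
    print(wheel.name)  # expected: meowpkg-0.1-py2.py3-none-any.whl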
15
15
71,950,802
2022-4-21
https://stackoverflow.com/questions/71950802/flask-cors-work-only-for-first-request-whats-the-bug-in-my-code
background There is a JS app serving at 127.0.0.1:8080, which refers some API serving at 127.0.0.1:5000 by a Flask app. [See FlaskCode] When I open this js app in Chrome, first request work well and the second request ends with CORS problem, [see ChromeDebug1]. Additionally, I found this 'OPTIONS' is response as 405 (method not allow) in Flask output, and the output from flask_cors is not same like first request. [see FlaskOut]. I'm a newbee in FE and python, so if it is stupid bug, please let me know. my env is MacOs M1 version11.1 Chrome Version 87.0 Python 3.8.2 Flask 2.1.1 Werkzeug 2.1.1 question It seems that flask_cors works only once in my code, but what's wrong? Look at the difference of first req and second req, seems that second reponse for 'OPTIONS' do not have headers ("Access-Control-Allow-Origin", "*")? why firest request not have log like flask_cors.extension:Request ====== second edit ========= Thanks for David's advice. I used tcp dump to capture networking, [See wireshark]. This OPTION request is standard in my opinion. So, it lead me to question 4. why flask print "{"examinationOID":"61e8d2248373a7329e12f29b"}OPTIONS /yd/pass-through/get-examination HTTP/1.1" 405 - while request not have body? Maybe printing is a trash object from last request which is not gc correctly, due to long connection and exception handling? I have only one python file, and run it with python ./demo2.py --log=INFO appendix FlaskCode # -*- coding: UTF-8 -*- from flask import Flask from flask import Response from flask_cors import CORS, cross_origin import logging import json app = Flask(__name__) CORS(app, supports_credentials=True) demoDataPath="xxx" @app.route("/yd/pass-through/get-examination", methods=['POST']) @cross_origin() def getexamination(): logging.getLogger('demo2').info('into getexamination') response = {} response["code"]=0 response["message"]="good end" f = open(demoDataPath+"/rsp4getexamination.json", "r") response["data"]= json.loads(f.read()) return Response(json.dumps(response), mimetype='application/json', status=200) @app.route("/yd/pass-through/report-config", methods=['POST']) @cross_origin() def getconfig(): logging.getLogger('demo2').info('into getconfig') response = {} response["code"]=0 response["message"]="good end" f = open(demoDataPath+"/rsp4getreportconfig.json", "r") response["data"]= json.loads(f.read()) return Response(json.dumps(response), mimetype='application/json', status=200) if __name__ == '__main__': logging.getLogger('flask_cors').level = logging.DEBUG logging.getLogger('werkzeug').level = logging.DEBUG logging.getLogger('demo2').level = logging.DEBUG app.logger.setLevel(logging.DEBUG) logging.info("app run") app.run(debug=True, threaded=True, port=5001) ChromeDebug1 FlaskOut DEBUG:flask_cors.core:CORS request received with 'Origin' http://127.0.0.1:8080 DEBUG:flask_cors.core:The request's Origin header matches. Sending CORS headers. 
DEBUG:flask_cors.core:Settings CORS headers: MultiDict([('Access-Control-Allow-Origin', 'http://127.0.0.1:8080'), ('Access-Control-Allow-Headers', 'content-type, traceid, withcredentials'), ('Access-Control-Allow-Methods', 'DELETE, GET, HEAD, OPTIONS, PATCH, POST, PUT'), ('Vary', 'Origin')]) DEBUG:flask_cors.extension:CORS have been already evaluated, skipping INFO:werkzeug:127.0.0.1 - - [21/Apr/2022 20:33:36] "OPTIONS /yd/pass-through/report-config HTTP/1.1" 200 - [2022-04-21 20:33:36,736] INFO in demo2: into getconfig INFO:demo2:into getconfig DEBUG:flask_cors.core:CORS request received with 'Origin' http://127.0.0.1:8080 DEBUG:flask_cors.core:The request's Origin header matches. Sending CORS headers. DEBUG:flask_cors.core:Settings CORS headers: MultiDict([('Access-Control-Allow-Origin', 'http://127.0.0.1:8080'), ('Vary', 'Origin')]) DEBUG:flask_cors.extension:CORS have been already evaluated, skipping INFO:werkzeug:127.0.0.1 - - [21/Apr/2022 20:33:36] "POST /yd/pass-through/report-config HTTP/1.1" 200 - DEBUG:flask_cors.extension:Request to '/yd/pass-through/get-examination' matches CORS resource '/*'. Using options: {'origins': ['.*'], 'methods': 'DELETE, GET, HEAD, OPTIONS, PATCH, POST, PUT', 'allow_headers': ['.*'], 'expose_headers': None, 'supports_credentials': True, 'max_age': None, 'send_wildcard': False, 'automatic_options': True, 'vary_header': True, 'resources': '/*', 'intercept_exceptions': True, 'always_send': True} DEBUG:flask_cors.core:CORS request received with 'Origin' http://127.0.0.1:8080 DEBUG:flask_cors.core:The request's Origin header matches. Sending CORS headers. DEBUG:flask_cors.core:Settings CORS headers: MultiDict([('Access-Control-Allow-Origin', 'http://127.0.0.1:8080'), ('Access-Control-Allow-Credentials', 'true'), ('Vary', 'Origin')]) DEBUG:flask_cors.extension:CORS have been already evaluated, skipping INFO:werkzeug:127.0.0.1 - - [21/Apr/2022 20:33:36] "{"examinationOID":"61e8d2248373a7329e12f29b"}OPTIONS /yd/pass-through/get-examination HTTP/1.1" 405 - wireshark
I was getting a similar error: param1=value1&param2=value2GET /css/base.css HTTP/1.1 where the leading params are from a POST request called just before. Setting threaded=False (as @david-k-hess suggested) helped. The whole story was: Browser submits a form using POST Flask server responds with a web page The web page contains /css/base.css in its head Browser downloads /css/base.css using GET Flask server returns HTTP 405 because it thinks the method is param1=value1&param2=value2GET Fix Fix 1 suggested by @david-k-hess: In Flask app.run(), add threaded=False Fix 2: It also seems that the problem was fixed by upgrading Flask and Werkzeug to 2.1.2 My environment: Python 3.9 Flask 2.1.1 Werkzeug 2.1.1 Other than that, my venv contains: Jinja2 MarkupSafe click colorama importlib-metadata itsdangerous pip setuptools wheel zipp
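A minimal self-contained sketch of Fix 1 (only the threaded flag differs from the question's entry point; upgrading Flask/Werkzeug to 2.1.2 is the alternative fix):

from flask import Flask

app = Flask(__name__)   # stands in for the app defined in the question

if __name__ == '__main__':
    # Fix 1: disable threaded mode, which avoids the leftover request body
    # being glued onto the next request line in the 405 log output.
    app.run(debug=True, threaded=False, port=5001)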
4
4
72,018,351
2022-4-26
https://stackoverflow.com/questions/72018351/compare-two-images-and-find-all-pixel-coordinates-that-differ
I have designed a program that compares two images and gives you the coordinates of the pixels that are different in both images and plots the using pygame. I do not mind having to use another library or to remake my whole code but it should ideally take less that 0.6s to process and it should not reduce file size, all I need it to do is to return the coordinates relative to the image My code: import cv2 import pygame from pygame.locals import * import time lib = 'Map1.png' lib2 = 'Map2.png' lib3 = () coordenatesx = () coordenatesy = () Read = list(cv2.imread(lib).astype("int")) Read2 = list(cv2.imread(lib2).astype("int")) counter = 0 pygame.init() flags = DOUBLEBUF screen = pygame.display.set_mode((500,500), flags, 4) start = time.process_time()#To tell me how long it takes for y in range(len(Read)):#y coords for x in range(len(Read[y])):#x coords all = list(Read[y][x])[0] all2 = list(Read2[y][x])[0] difference = (all)-(all2) if difference > 10 or difference < -10: #To see if the pixel's difference is in the boundary if not it is different and it gets plotted counter+=1 pygame.draw.rect(screen, (255, 0, 0), pygame.Rect(x, y, 1, 1)) pygame.display.update() print(time. process_time() - start) if counter >= (y * x) * 0.75: print('They are similar images') print('They are different by only :', str((counter / (y * x)) * 100), '%') else: print('They are different') print('They are different by:', str((counter / (y * x)) * 100), '%') pygame.display.update() image1 image2
You do not need to use a for loop to do the same. Numpy makes things simple: it's easy to understand and it speeds up operations. Reading both your images in grayscale: import cv2 import numpy as np img1 = cv2.imread(r'C:\Users\524316\Desktop\Stack\m1.png', 0) img2 = cv2.imread(r'C:\Users\524316\Desktop\Stack\m2.png', 0) Subtracting them. cv2.subtract() takes care of normalization such that it doesn't return negative values. Coordinates having no change remain black (pixel intensity = 0) sub = cv2.subtract(img1, img2) Using numpy, find the coordinates where changes are more than 0 coords = np.argwhere(sub > 0) # return first 10 elements of the array coords coords[:10] array([[ 0, 23], [ 0, 24], [ 0, 25], [ 0, 26], [ 0, 27], [ 0, 28], [ 0, 29], [ 0, 30], [ 0, 31], [ 0, 32]], dtype=int64) coords returns an array, which can be converted to a list: coords_list = coords.tolist() # return first 10 elements of the list: >>> coords_list[:10] [[0, 23], [0, 24], [0, 25], [0, 26], [0, 27], [0, 28], [0, 29], [0, 30], [0, 31], [0, 32]] Update: Based on the comment made by fmw42, if you are only looking for coordinates where the difference between pixel intensities is less than or greater than a certain value (say 10), you could do the following: sub = cv2.absdiff(img1, img2) np.argwhere(sub > 10)
4
3
72,012,784
2022-4-26
https://stackoverflow.com/questions/72012784/apply-a-filter-to-an-automatically-joined-table
Here's my SQL setup create table a ( id serial primary key, ta text ); create table b ( id serial primary key, tb text, aid integer references a(id) not null ); Python: import sqlalchemy as sa import sqlalchemy.orm connection_url = "..." engine = sa.create_engine(connection_url, echo=True, future=True) mapper_registry = sa.orm.registry() class A: pass class B: pass mapper_registry.map_imperatively( B, sa.Table( 'b', mapper_registry.metadata, sa.Column('id', sa.Integer, primary_key=True), sa.Column('tb', sa.String(50)), sa.Column('aid', sa.ForeignKey('a.id')), )) mapper_registry.map_imperatively( A, sa.Table( 'a', mapper_registry.metadata, sa.Column('id', sa.Integer, primary_key=True), sa.Column('ta', sa.String(50)) ), properties={ 'blist': sa.orm.relationship(B, lazy='joined'), }, ) with sa.orm.Session(engine) as session: sel = sa.select(A) cur = session.execute(sel) for rec in cur.unique().all(): print(rec.A.ta, [b.tb for b in rec.A.blist]) This works fine so far, but now I need to apply a filter to the subtable (B) to include only rows that match the criteria. sel = sa.select(A).where(?WHAT?.like('search')) In other words, how do I write an equivalent of the following SQL in SqlAlchemy? SELECT * FROM a LEFT OUTER JOIN b ON a.id = b.aid WHERE b.tb like 'search' How about this one (which I expect to produce empty lists in the target class): SELECT * FROM a LEFT OUTER JOIN b ON a.id = b.aid AND b.tb like 'search'
The two solutions (for 2 asked questions) presented below rely on two sqlalchemy functions: sqlalchemy.orm.contains_eager to trick sqlalchemy that the desired relationship is already part of the query; and sqlalchemy.orm.Query.options to disable the default joinedload configured on the relationship. Question 1 SELECT * FROM a OUTER JOIN b ON a.id = b.aid WHERE b.tb like 'search' Answer 1 is achieved by this query with explanations in line: sel = ( sa.select(A) # disable 'joinedload' if it remains configured on the mapper; otherwise, the line below can be removed .options(sa.orm.lazyload(A.blist)) # join A.blist explicitely .outerjoin(B, A.blist) # or: .outerjoin(B, A.id == B.aid) # add the filter .filter(B.tb.like('search')) # trick/hint to SQ that the relationship objects are already returned in the query .options(sa.orm.contains_eager(A.blist)) ) Question 2 SELECT * FROM a OUTER JOIN b ON a.id = b.aid AND b.tb like 'search' Answer 2 is achieved by this query with explanations in line, but basically the .filter condition is moved into the join condition: sel = ( sa.select(A) # disable 'joinedload' if it remains configured on the mapper; otherwise, the line below can be removed .options(sa.orm.lazyload(A.blist)) # join A.blist explicitely including the filter .outerjoin(B, sa.and_(B.aid == A.id, B.tb.like('search'))) # trick/hint to SQ that the relationship objects are already returned in the query .options(sa.orm.contains_eager(A.blist)) ) Warning: you should be careful when using contains_eager and use it in the well defined scope, because you are basically "lying" to the SA model that you have loaded "all" related objects when you might not be. For the purpose of just querying the data it is usually totally fine, but working on modifying and adding to the relationships might lead to some strange results.
6
6
71,996,754
2022-4-25
https://stackoverflow.com/questions/71996754/how-to-enable-django-admin-sidebar-navigation-in-a-custom-view
I have a view inheriting from LoginRequiredMixin, TemplateView, which renders some data using the admin/base_site.html template as the base. I treat it as a part of the admin interface, so it requires an administrator login. I'd like to make this view a little bit more a part of the Django admin interface by enabling the standard sidebar navigation on the left-hand side. Note that I don't have a custom ModelAdmin definition anywhere, I simply render the template at some predefined URL. There are no models used in the interface either, it parses and displays data from database-unrelated sources. Currently, I just build the required context data manually, e.g.: data = super().get_context_data(**kwargs) data.update(**{ "is_popup": False, "site_header": None, "is_nav_sidebar_enabled": True, "has_permission": True, "title": "My title", "subtitle": None, "site_url": None, "available_apps": [] }) The sidebar is visible, but displays an error message: Adding an app to available_apps ("available_apps": ["my_app"]) doesn't help either: So my question is - how do I do that? Is there a class I can inherit from to achieve this behaviour? Or a method I can call to get all required context data for base_site.html? Or perhaps I should insert some information in my template? Perhaps I need an AdminSite object, or can somehow call methods of the default one?
By accident I noticed there is a DefaultAdminSite object in django.contrib.admin.sites, and it is instantiated as site. Therefore, in my case, simply using site and its each_context() method is sufficient. from django.contrib.admin import AdminSite from django.contrib.admin.sites import site admin_site: AdminSite = site data.update(**admin_site.each_context(self.request)) Furthermore, it turns out I can obtain the site class through the app registry in case importing site would be an issue, just like DefaultAdminSite itself does: AdminSiteClass = import_string(apps.get_app_config("admin").default_site) self._wrapped = AdminSiteClass()
8
6
72,011,315
2022-4-26
https://stackoverflow.com/questions/72011315/permissionerror-winerror-32-the-process-cannot-access-the-file-because-it-is
I installed python-certifi-win32 package and after that, I am getting below error, when I import anything or pip install anything, the fail with the final error of PermissionError. I tried rebooting the box. It didn't work. I am unable to uninstall the package as pip is erroring out too. I am unable to figure out the exact reason why this error is happening. It doesn't seem to be code specific, seems related to the library I installed PS C:\Users\visha\PycharmProjects\master_test_runner> pip install python-certifi-win32 Traceback (most recent call last): File "C:\Users\visha\AppData\Local\Programs\Python\Python310\lib\importlib\_common.py", line 89, in _tempfile os.write(fd, reader()) File "C:\Users\visha\AppData\Local\Programs\Python\Python310\lib\importlib\abc.py", line 371, in read_bytes with self.open('rb') as strm: File "C:\Users\visha\AppData\Local\Programs\Python\Python310\lib\importlib\_adapters.py", line 54, in open raise ValueError() ValueError During handling of the above exception, another exception occurred: Traceback (most recent call last): File "C:\Users\visha\AppData\Local\Programs\Python\Python310\lib\runpy.py", line 196, in _run_module_as_main return _run_code(code, main_globals, None, File "C:\Users\visha\AppData\Local\Programs\Python\Python310\lib\runpy.py", line 86, in _run_code exec(code, run_globals) File "C:\Users\visha\PycharmProjects\GUI_Automation\venv\Scripts\pip.exe\__main__.py", line 4, in <module> File "C:\Users\visha\PycharmProjects\GUI_Automation\venv\lib\site-packages\pip\_internal\cli\main.py", line 9, in <module> from pip._internal.cli.autocompletion import autocomplete File "C:\Users\visha\PycharmProjects\GUI_Automation\venv\lib\site-packages\pip\_internal\cli\autocompletion.py", line 10, in <module> from pip._internal.cli.main_parser import create_main_parser File "C:\Users\visha\PycharmProjects\GUI_Automation\venv\lib\site-packages\pip\_internal\cli\main_parser.py", line 8, in <module> from pip._internal.cli import cmdoptions File "C:\Users\visha\PycharmProjects\GUI_Automation\venv\lib\site-packages\pip\_internal\cli\cmdoptions.py", line 23, in <module> from pip._internal.cli.parser import ConfigOptionParser File "C:\Users\visha\PycharmProjects\GUI_Automation\venv\lib\site-packages\pip\_internal\cli\parser.py", line 12, in <module> from pip._internal.configuration import Configuration, ConfigurationError File "C:\Users\visha\PycharmProjects\GUI_Automation\venv\lib\site-packages\pip\_internal\configuration.py", line 21, in <module> from pip._internal.exceptions import ( File "C:\Users\visha\PycharmProjects\GUI_Automation\venv\lib\site-packages\pip\_internal\exceptions.py", line 8, in <module> from pip._vendor.requests.models import Request, Response File "C:\Users\visha\PycharmProjects\GUI_Automation\venv\lib\site-packages\pip\_vendor\requests\__init__.py", line 123, in <module> from . import utils File "C:\Users\visha\PycharmProjects\GUI_Automation\venv\lib\site-packages\pip\_vendor\requests\utils.py", line 25, in <module> from . 
import certs File "<frozen importlib._bootstrap>", line 1027, in _find_and_load File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 688, in _load_unlocked File "C:\Users\visha\PycharmProjects\GUI_Automation\venv\lib\site-packages\wrapt\importer.py", line 170, in exec_module notify_module_loaded(module) File "C:\Users\visha\PycharmProjects\GUI_Automation\venv\lib\site-packages\wrapt\decorators.py", line 470, in _synchronized return wrapped(*args, **kwargs) File "C:\Users\visha\PycharmProjects\GUI_Automation\venv\lib\site-packages\wrapt\importer.py", line 136, in notify_module_loaded hook(module) File "C:\Users\visha\PycharmProjects\GUI_Automation\venv\lib\site-packages\certifi_win32\wrapt_pip.py", line 35, in apply_patches import certifi File "<frozen importlib._bootstrap>", line 1027, in _find_and_load File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 688, in _load_unlocked File "C:\Users\visha\PycharmProjects\GUI_Automation\venv\lib\site-packages\wrapt\importer.py", line 170, in exec_module notify_module_loaded(module) File "C:\Users\visha\PycharmProjects\GUI_Automation\venv\lib\site-packages\wrapt\decorators.py", line 470, in _synchronized return wrapped(*args, **kwargs) File "C:\Users\visha\PycharmProjects\GUI_Automation\venv\lib\site-packages\wrapt\importer.py", line 136, in notify_module_loaded hook(module) File "C:\Users\visha\PycharmProjects\GUI_Automation\venv\lib\site-packages\certifi_win32\wrapt_certifi.py", line 20, in apply_patches certifi_win32.wincerts.CERTIFI_PEM = certifi.where() File "C:\Users\visha\PycharmProjects\GUI_Automation\venv\lib\site-packages\certifi\core.py", line 37, in where _CACERT_PATH = str(_CACERT_CTX.__enter__()) File "C:\Users\visha\AppData\Local\Programs\Python\Python310\lib\contextlib.py", line 135, in __enter__ return next(self.gen) File "C:\Users\visha\AppData\Local\Programs\Python\Python310\lib\importlib\_common.py", line 95, in _tempfile os.remove(raw_path) PermissionError: [WinError 32] The process cannot access the file because it is being used by another process: 'C:\\Users\\visha\\AppData\\Local\\Temp\\tmpy_tb8siv' PS C:\Users\visha\PycharmProjects\master_test_runner>
I ran into the same issue today. I corrected it by removing two *.pth files that were created when I had installed python-certifi-win32. This prevents python-certifi-win32 from loading when python is run. The files are listed below, and were located here: C:\Users\<username>\AppData\Local\Programs\Python\Python310\Lib\site-packages Files: python-certifi-win32-init.pth distutils-precedence.pth Removing these files allowed me to install/uninstall other modules.
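If you are not sure where that site-packages folder lives on your machine, a small helper like this (just a convenience sketch, not part of the original fix) prints the directories in which to look for the two *.pth files:

import site
import sysconfig

# Candidate locations for python-certifi-win32-init.pth and distutils-precedence.pth
print(sysconfig.get_paths()["purelib"])
for path in site.getsitepackages():
    print(path)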
14
32
72,017,146
2022-4-26
https://stackoverflow.com/questions/72017146/how-to-get-all-fuzzy-matching-substrings-between-two-strings-in-python
Say I have three example strings text1 = "Patient has checked in for abdominal pain which started 3 days ago. Patient was prescribed idx 20 mg every 4 hours." text2 = "The time of discomfort was 3 days ago." text3 = "John was given a prescription of idx, 20mg to be given every four hours" If I got all the matching substrings of text2 and text3 with text1, I would get text1_text2_common = [ '3 days ago.', ] text2_text3_common = [ 'of', ] text1_text3_common = [ 'was', 'idx' 'every' 'hours' ] What I am looking for is a fuzzy matching, using something like the Levenshtein distance. So even if the substrings are not exact, if they are similar enough for a criteria, it would get selected as a substring. So ideally I am looking for something like this: text1_text3_common_fuzzy = [ 'prescription of idx, 20mg to be given every four hours' ]
Here is a code to calculate the similarity by fuzzy ratio between the sub-string of string1 and full-string of string2. The code can also handle sub-string of string2 and full-string of string1 and also sub-string of string1 and sub-string of string2. This one uses nltk to generate ngrams. Typical algorithm: Generate ngrams from the given first string. Example: text2 = "The time of discomfort was 3 days ago." total_length = 8 In the code the param has values 5, 6, 7, 8. param = 5 ngrams = ['The time of discomfort was', 'time of discomfort was 3', 'of discomfort was 3 days', 'discomfort was 3 days ago.'] Compare it with second string. Example: text1 = Patient has checked in for abdominal pain which started 3 days ago. Patient was prescribed idx 20 mg every 4 hours. @param=5 compare The time of discomfort was vs text1 and get the fuzzy score compare time of discomfort was 3 vs text1 and get the fuzzy score and so on until all elements in ngrams_5 are finished Save sub-string if fuzzy score is greater than or equal to given threshold. @param=6 compare The time of discomfort was 3 vs text1 and get the fuzzy score and so on until @param=8 You can revise the code changing n_start to 5 or so, so that the ngrams of string1 will be compared to the ngrams of string2, in this case this is a comparison of sub-string of string1 and sub-string of string2. # Generate ngrams for string2 n_start = 5 # st2_length for n in range(n_start, st2_length + 1): ... For comparison I use: fratio = fuzz.token_set_ratio(fs1, fs2) Have a look at this also. You can try different ratios as well. Your sample 'prescription of idx, 20mg to be given every four hours' has a fuzzy score of 52. See sample console output. 7 prescription of idx, 20mg to be given every four hours 52 Code """ fuzzy_match.py https://stackoverflow.com/questions/72017146/how-to-get-all-fuzzy-matching-substrings-between-two-strings-in-python Dependent modules: pip install pandas pip install nltk pip install fuzzywuzzy pip install python-Levenshtein """ from nltk.util import ngrams import pandas as pd from fuzzywuzzy import fuzz # Sample strings. text1 = "Patient has checked in for abdominal pain which started 3 days ago. Patient was prescribed idx 20 mg every 4 hours." text2 = "The time of discomfort was 3 days ago." text3 = "John was given a prescription of idx, 20mg to be given every four hours" def myprocess(st1: str, st2: str, threshold): """ Generate sub-strings from st1 and compare with st2. The sub-strings, full string and fuzzy ratio will be saved in csv file. """ data = [] st1_length = len(st1.split()) st2_length = len(st2.split()) # Generate ngrams for string1 m_start = 5 for m in range(m_start, st1_length + 1): # st1_length >= m_start # If m=3, fs1 = 'Patient has checked', 'has checked in', 'checked in for' ... # If m=5, fs1 = 'Patient has checked in for', 'has checked in for abdominal', ... for s1 in ngrams(st1.split(), m): fs1 = ' '.join(s1) # Generate ngrams for string2 n_start = st2_length for n in range(n_start, st2_length + 1): for s2 in ngrams(st2.split(), n): fs2 = ' '.join(s2) fratio = fuzz.token_set_ratio(fs1, fs2) # there are other ratios # Save sub string if ratio is within threshold. if fratio >= threshold: data.append([fs1, fs2, fratio]) return data def get_match(sub, full, colname1, colname2, threshold=50): """ sub: is a string where we extract the sub-string. full: is a string as the base/reference. threshold: is the minimum fuzzy ratio where we will save the sub string. Max fuzz ratio is 100. 
""" save = myprocess(sub, full, threshold) df = pd.DataFrame(save) if len(df): df.columns = [colname1, colname2, 'fuzzy_ratio'] is_sort_by_fuzzy_ratio_first = True if is_sort_by_fuzzy_ratio_first: df = df.sort_values(by=['fuzzy_ratio', colname1], ascending=[False, False]) else: df = df.sort_values(by=[colname1, 'fuzzy_ratio'], ascending=[False, False]) df = df.reset_index(drop=True) df.to_csv(f'{colname1}_{colname2}.csv', index=False) # Print to console. Show only the sub-string and the fuzzy ratio. High ratio implies high similarity. df1 = df[[colname1, 'fuzzy_ratio']] print(df1.to_string()) print() print(f'sub: {sub}') print(f'base: {full}') print() def main(): get_match(text2, text1, 'string2', 'string1', threshold=50) # output string2_string1.csv get_match(text3, text1, 'string3', 'string1', threshold=50) get_match(text2, text3, 'string2', 'string3', threshold=10) # Other param combo. if __name__ == '__main__': main() Console Output string2 fuzzy_ratio 0 discomfort was 3 days ago. 72 1 of discomfort was 3 days ago. 67 2 time of discomfort was 3 days ago. 60 3 of discomfort was 3 days 59 4 The time of discomfort was 3 days ago. 55 5 time of discomfort was 3 days 51 sub: The time of discomfort was 3 days ago. base: Patient has checked in for abdominal pain which started 3 days ago. Patient was prescribed idx 20 mg every 4 hours. string3 fuzzy_ratio 0 be given every four hours 61 1 idx, 20mg to be given every four hours 58 2 was given a prescription of idx, 20mg to be given every four hours 56 3 to be given every four hours 56 4 John was given a prescription of idx, 20mg to be given every four hours 56 5 of idx, 20mg to be given every four hours 55 6 was given a prescription of idx, 20mg to be given every four 52 7 prescription of idx, 20mg to be given every four hours 52 8 given a prescription of idx, 20mg to be given every four hours 52 9 a prescription of idx, 20mg to be given every four hours 52 10 John was given a prescription of idx, 20mg to be given every four 52 11 idx, 20mg to be given every 51 12 20mg to be given every four hours 50 sub: John was given a prescription of idx, 20mg to be given every four hours base: Patient has checked in for abdominal pain which started 3 days ago. Patient was prescribed idx 20 mg every 4 hours. string2 fuzzy_ratio 0 time of discomfort was 3 days ago. 41 1 time of discomfort was 3 days 41 2 time of discomfort was 3 40 3 of discomfort was 3 days 40 4 The time of discomfort was 3 days ago. 40 5 of discomfort was 3 days ago. 39 6 The time of discomfort was 3 days 39 7 The time of discomfort was 38 8 The time of discomfort was 3 35 9 discomfort was 3 days ago. 34 sub: The time of discomfort was 3 days ago. base: John was given a prescription of idx, 20mg to be given every four hours Sample CSV output string2_string1.csv Using Spacy similarity Here is the result of the comparison between sub-string of text3 and full text of text1 using spacy. The result below is intended to be compared with the 2nd table above to see which method presents a better ranking of similarity. I use the large model to get the result below. Code import spacy import pandas as pd nlp = spacy.load("en_core_web_lg") text1 = "Patient has checked in for abdominal pain which started 3 days ago. Patient was prescribed idx 20 mg every 4 hours." 
text3 = "John was given a prescription of idx, 20mg to be given every four hours" text3_sub = [ 'be given every four hours', 'idx, 20mg to be given every four hours', 'was given a prescription of idx, 20mg to be given every four hours', 'to be given every four hours', 'John was given a prescription of idx, 20mg to be given every four hours', 'of idx, 20mg to be given every four hours', 'was given a prescription of idx, 20mg to be given every four', 'prescription of idx, 20mg to be given every four hours', 'given a prescription of idx, 20mg to be given every four hours', 'a prescription of idx, 20mg to be given every four hours', 'John was given a prescription of idx, 20mg to be given every four', 'idx, 20mg to be given every', '20mg to be given every four hours' ] data = [] for s in text3_sub: doc1 = nlp(s) doc2 = nlp(text1) sim = round(doc1.similarity(doc2), 3) data.append([s, text1, sim]) df = pd.DataFrame(data) df.columns = ['from text3', 'text1', 'similarity'] df = df.sort_values(by=['similarity'], ascending=[False]) df = df.reset_index(drop=True) df1 = df[['from text3', 'similarity']] print(df1.to_string()) print() print(f'text3: {text3}') print(f'text1: {text1}') Output from text3 similarity 0 was given a prescription of idx, 20mg to be given every four hours 0.904 1 John was given a prescription of idx, 20mg to be given every four hours 0.902 2 a prescription of idx, 20mg to be given every four hours 0.895 3 prescription of idx, 20mg to be given every four hours 0.893 4 given a prescription of idx, 20mg to be given every four hours 0.892 5 of idx, 20mg to be given every four hours 0.889 6 idx, 20mg to be given every four hours 0.883 7 was given a prescription of idx, 20mg to be given every four 0.879 8 John was given a prescription of idx, 20mg to be given every four 0.877 9 20mg to be given every four hours 0.877 10 idx, 20mg to be given every 0.835 11 to be given every four hours 0.834 12 be given every four hours 0.832 text3: John was given a prescription of idx, 20mg to be given every four hours text1: Patient has checked in for abdominal pain which started 3 days ago. Patient was prescribed idx 20 mg every 4 hours. It looks like the spacy method produces a nice ranking of similarity.
5
7
72,003,987
2022-4-25
https://stackoverflow.com/questions/72003987/pydantic-checking-if-list-field-is-unique
Currently, I am trying to create a pydantic model for a pandas dataframe. I would like to check if a column is unique by the following import pandas as pd from typing import List from pydantic import BaseModel class CustomerRecord(BaseModel): id: int name: str address: str class CustomerRecordDF(BaseModel): __root__: List[CustomerRecord] df = pd.DataFrame({'id':[1,2,3], 'name':['Bob','Joe','Justin'], 'address': ['123 Fake St', '125 Fake St', '123 Fake St']}) df_dict = df.to_dict(orient='records') CustomerRecordDF.parse_obj(df_dict) I would now like to run a validation here and have it fail since address is not unique. The following returns what I need from pydantic import root_validator class CustomerRecordDF(BaseModel): __root__: List[CustomerRecord] @root_validator(pre=True) def unique_values(cls, values): root_values = values.get('__root__') value_set = set() for value in root_values: print(value['address']) if value['address'] in value_set: raise ValueError('Duplicate Address') else: value_set.add(value['address']) return values CustomerRecordDF.parse_obj(df_dict) >>> ValidationError: 1 validation error for CustomerRecordDF __root__ Duplicate Address (type=value_error) but i want to be able to reuse this validator for other other dataframes I create and to also pass in this unique check on multiple columns. Not just address. Ideally something like the following from pydantic import root_validator class CustomerRecordDF(BaseModel): __root__: List[CustomerRecord] _validate_unique_name = root_unique_validator('name') _validate_unique_address = root_unique_validator('address')
You could use an inner function and the allow_reuse argument: def root_unique_validator(field): def validator(cls, values): # Use the field arg to validate a specific field ... return root_validator(pre=True, allow_reuse=True)(validator) Full example: import pandas as pd from typing import List from pydantic import BaseModel, root_validator class CustomerRecord(BaseModel): id: int name: str address: str def root_unique_validator(field): def validator(cls, values): root_values = values.get("__root__") value_set = set() for value in root_values: if value[field] in value_set: raise ValueError(f"Duplicate {field}") else: value_set.add(value[field]) return values return root_validator(pre=True, allow_reuse=True)(validator) class CustomerRecordDF(BaseModel): __root__: List[CustomerRecord] _validate_unique_name = root_unique_validator("name") _validate_unique_address = root_unique_validator("address") df = pd.DataFrame( { "id": [1, 2, 3], "name": ["Bob", "Joe", "Justin"], "address": ["123 Fake St", "125 Fake St", "123 Fake St"], } ) df_dict = df.to_dict(orient="records") CustomerRecordDF.parse_obj(df_dict) # Output: # pydantic.error_wrappers.ValidationError: 1 validation error for CustomerRecordDF # __root__ # Duplicate address (type=value_error) And if you use a duplicated name: # Here goes the most part of the full example above df = pd.DataFrame( { "id": [1, 2, 3], "name": ["Bob", "Joe", "Bob"], "address": ["123 Fake St", "125 Fake St", "127 Fake St"], } ) df_dict = df.to_dict(orient="records") CustomerRecordDF.parse_obj(df_dict) # Output: # pydantic.error_wrappers.ValidationError: 1 validation error for CustomerRecordDF # __root__ # Duplicate name (type=value_error) You could also receive more than one field and have a single root validator that validates all the fields. That will probably make the allow_reuse argument unnecessary.
5
1
71,937,745
2022-4-20
https://stackoverflow.com/questions/71937745/python-closest-match-between-two-string-columns
I am looking to get the closest match between two columns of string data type in two separate tables. I don't think the content matters too much. There are words that I can match by pre-processing the data (lower all letters, replace spaces and stop words, etc...) and doing a join. However I get around 80 matches out of over 350. It is important to know that the length of each table is different. I did try to use some code I found online but it isn't working: def Races_chien(df1,df2): myList = [] total = len(df1) possibilities = list(df2['Rasse']) s = SequenceMatcher(isjunk=None, autojunk=False) for idx1, df1_str in enumerate(df1['Race']): my_str = ('Progress : ' + str(round((idx1 / total) * 100, 3)) + '%') sys.stdout.write('\r' + str(my_str)) sys.stdout.flush() # get 1 best match that has a ratio of at least 0.7 best_match = get_close_matches(df1_str, possibilities, 1, 0.7) s.set_seq2(df1_str, best_match) myList.append([df1_str, best_match, s.ratio()]) return myList It says: TypeError: set_seq2() takes 2 positional arguments but 3 were given How can I make this work?
Here is the answer I finally arrived at: import pandas as pd from fuzzywuzzy import process value = [] similarity = [] for i in df1.col: ratio = process.extract(i, df2.col, limit=1) value.append(ratio[0][0]) similarity.append(ratio[0][1]) df1['value'] = pd.Series(value) df1['similarity'] = pd.Series(similarity) This will add the value of the closest match from df2 to df1, together with the similarity %.
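For illustration, a self-contained usage sketch with made-up breed names (the column name col and the sample rows are assumptions, not taken from the original tables):

import pandas as pd
from fuzzywuzzy import process

df1 = pd.DataFrame({'col': ['labrador retriver', 'german sheperd']})             # deliberately misspelled
df2 = pd.DataFrame({'col': ['Labrador Retriever', 'German Shepherd', 'Poodle']})

value, similarity = [], []
for i in df1.col:
    best = process.extract(i, df2.col, limit=1)[0]   # (match, score, ...) tuple
    value.append(best[0])
    similarity.append(best[1])

df1['value'] = value
df1['similarity'] = similarity
print(df1)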
4
2