id
stringlengths 40
40
| text
stringlengths 29
2.03k
| original_text
stringlengths 3
154k
| subdomain
stringclasses 20
values | metadata
dict |
---|---|---|---|---|
82f75afadfc85c1a5db1ab6b5ef80b98a2ae0989
|
Stackoverflow Stackexchange
Q: Error related to service manager I'm running test cases in protractor and observing error message from point to point as "ERROR:service_manager.cc(425)] InterfaceProviderSpec prevented connection from: content_utility to: content_browser" which in return failing some of my test cases. I have not observed this error message before and observing it now after updates in chrome browser.
A: Find attached imageRun Sql server browser under services.
|
Q: Error related to service manager I'm running test cases in protractor and observing error message from point to point as "ERROR:service_manager.cc(425)] InterfaceProviderSpec prevented connection from: content_utility to: content_browser" which in return failing some of my test cases. I have not observed this error message before and observing it now after updates in chrome browser.
A: Find attached imageRun Sql server browser under services.
|
stackoverflow
|
{
"language": "en",
"length": 64,
"provenance": "stackexchange_0000F.jsonl.gz:864768",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44541989"
}
|
cbe5ee81423a9f17985869e256830a7a242c2ec0
|
Stackoverflow Stackexchange
Q: Does a context switch occurs when an interrupt is fired? A process's virtual address space contains 1 GB of kernel space:
Now I assume that this 1 GB of kernel space points to data and code related to the kernel (including the Interrupt Descriptor Table (IDT)).
Now let's say that some process is being executed by the CPU, and this process made a system call (fired the interrupt 0x80 (int 0x80)). What will happen is that the CPU will go to the IDT and execute the interrupt handler associated with the interrupt number 0x80.
Now will the CPU stays in the current process, and execute the interrupt handler from the kernel space of the current process (so no context switching occurs)?
|
Q: Does a context switch occurs when an interrupt is fired? A process's virtual address space contains 1 GB of kernel space:
Now I assume that this 1 GB of kernel space points to data and code related to the kernel (including the Interrupt Descriptor Table (IDT)).
Now let's say that some process is being executed by the CPU, and this process made a system call (fired the interrupt 0x80 (int 0x80)). What will happen is that the CPU will go to the IDT and execute the interrupt handler associated with the interrupt number 0x80.
Now will the CPU stays in the current process, and execute the interrupt handler from the kernel space of the current process (so no context switching occurs)?
|
stackoverflow
|
{
"language": "en",
"length": 122,
"provenance": "stackexchange_0000F.jsonl.gz:864785",
"question_score": "7",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44542050"
}
|
5e58d49e676f2cf731b3d86a450b8c03b6cd4a67
|
Stackoverflow Stackexchange
Q: How to block a commit, if submodules have changed? Is it possible to block a commit using a pre-commit hook, if:
*
*Submodules have uncommitted changes
*Submodules have unpushed changes to at least one remote
*Submodules are in a detached head state
A: Search for uncommitted changes: git submodule -q foreach git status --short. If there is any output — block commit.
Search for unpushed changes: git submodule -q foreach git branch --verbose | grep "ahead\|behind".
Check for detached HEAD: git submodule -q foreach git rev-parse --symbolic-full-name HEAD. If there is at least one 'HEAD' in the output — block commit.
|
Q: How to block a commit, if submodules have changed? Is it possible to block a commit using a pre-commit hook, if:
*
*Submodules have uncommitted changes
*Submodules have unpushed changes to at least one remote
*Submodules are in a detached head state
A: Search for uncommitted changes: git submodule -q foreach git status --short. If there is any output — block commit.
Search for unpushed changes: git submodule -q foreach git branch --verbose | grep "ahead\|behind".
Check for detached HEAD: git submodule -q foreach git rev-parse --symbolic-full-name HEAD. If there is at least one 'HEAD' in the output — block commit.
|
stackoverflow
|
{
"language": "en",
"length": 102,
"provenance": "stackexchange_0000F.jsonl.gz:864828",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44542167"
}
|
598305eb5eab701d05729535ca54c1835c3dbc06
|
Stackoverflow Stackexchange
Q: How to use RUN clone git in dockerfile Instead of using the ADD or COPY command I would like the docker image to download the python script (aa.py) that I want to execute from my git.
In mygit there is only one file called aa.py.
This doesn't work:
FROM python:3
RUN git clone https://github.com/user/mygit.git
CMD [ "python3", "./aa.py" ]
Error message:
ERR /usr/local/bin/python: can't open file './aa.py': [Errno 2] No such file or directory
A: The best solution is to change docker working directory using WORKDIR. So your Dockerfile should look like this:
FROM python:3
RUN git clone https://github.com/user/mygit.git
WORKDIR mygit
CMD [ "python3", "./aa.py" ]
|
Q: How to use RUN clone git in dockerfile Instead of using the ADD or COPY command I would like the docker image to download the python script (aa.py) that I want to execute from my git.
In mygit there is only one file called aa.py.
This doesn't work:
FROM python:3
RUN git clone https://github.com/user/mygit.git
CMD [ "python3", "./aa.py" ]
Error message:
ERR /usr/local/bin/python: can't open file './aa.py': [Errno 2] No such file or directory
A: The best solution is to change docker working directory using WORKDIR. So your Dockerfile should look like this:
FROM python:3
RUN git clone https://github.com/user/mygit.git
WORKDIR mygit
CMD [ "python3", "./aa.py" ]
A: Problem here is aa.py file is in your current working directory
change the Dockerfile content to
FROM python:3
RUN git clone https://github.com/user/mygit.git
WORKDIR mygit
CMD [ "python3", "./aa.py" ]
OR
FROM python:3
RUN git clone https://github.com/user/mygit.git
CMD [ "python3", "mygit/aa.py" ]
A: Your problem is that CMD instruction can't find file aa.py.
You have to specify complete path to your aa.py which, if you didn't change the working directory will be /project_name/aa.py.
|
stackoverflow
|
{
"language": "en",
"length": 180,
"provenance": "stackexchange_0000F.jsonl.gz:864853",
"question_score": "5",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44542253"
}
|
c38899b1512b4c369d5bf28fafa7d48e645d517f
|
Stackoverflow Stackexchange
Q: How to read Azure web site app settings values I am trying to configure some key/value pairs for my Azure web application using app settings section on Windows Azure preview portal.
Now I am trying to read values like below
ConfigurationManager.AppSettings["MyWebApp.DbConnectionString"];
but it returns null values.
Reading app settings from Web.config in my web application works fine.
A: I found the solution.
Keep values in web.config as well as in Azure App setting. When you are running/debugging application on your local environment it picks values from web.config.
When you deploy application on Azure it picks values from App setting.
//Below code work for both.
ConfigurationManager.AppSettings["KeyName"]
Keep key name same in web.config as well as in Azure app setting.
|
Q: How to read Azure web site app settings values I am trying to configure some key/value pairs for my Azure web application using app settings section on Windows Azure preview portal.
Now I am trying to read values like below
ConfigurationManager.AppSettings["MyWebApp.DbConnectionString"];
but it returns null values.
Reading app settings from Web.config in my web application works fine.
A: I found the solution.
Keep values in web.config as well as in Azure App setting. When you are running/debugging application on your local environment it picks values from web.config.
When you deploy application on Azure it picks values from App setting.
//Below code work for both.
ConfigurationManager.AppSettings["KeyName"]
Keep key name same in web.config as well as in Azure app setting.
A: In Azure, there are a few different ways of retrieving Application Settings and Connection Strings. However, connection strings work a little differently than vanilla application settings.
Application Settings can be retrieved by any method, regardless of whether or not they are present in the Web.config file.
Connection Strings can also be retrieved by any method if the string is defined in Web.config. However, if the connection string is NOT defined in Web.config, then it can only be retrieved using the Environment Variable method.
Retrieving as Environment Variable
Environment.GetEnvironmentVariable("APPSETTING_my-setting-key");
Environment.GetEnvironmentVariable("SQLAZURECONNSTR_my-connection-string-key");
Note that the keys must be prepended with a string designating their type when using this method.
All Application Settings use the APPSETTING_ prefix.
Connection Strings have a different prefix depending on the type of database selected when creating the string in the portal:
"Sql Databases" --> "SQLAZURECONNSTR_my-connection-string-key"
"SQL Server" --> "SQLCONNSTR_my-connection-string-key"
"MySQL" --> "MYSQLCONNSTR_my-connection-string-key"
"Custom" --> "CUSTOMCONNSTR_my-connection-string-key"
For a full overview, see the Windows Azure Web Sites documentation.
A: System.Environment.GetEnvironmentVariable("SERVICEBUS_CONNECTION")
works great!
|
stackoverflow
|
{
"language": "en",
"length": 282,
"provenance": "stackexchange_0000F.jsonl.gz:864901",
"question_score": "31",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44542409"
}
|
aebc000cd4fed517051b8610d8e855f1461265c3
|
Stackoverflow Stackexchange
Q: DataFrame' object has no attribute 'sort' hello~could u help me to solve the questions?
【Anaconda3-4.4.0】
import pandas as pd
from sqlalchemy import create_engine
engine = create_engine('mysql+pymysql://root:123456@localhost:3306/mysql?charset=utf8')
sql = pd.read_sql('all_gzdata', engine, chunksize = 10000)
counts = [ i['fullURLId'].value_counts() for i in sql]
counts = pd.concat(counts).groupby(level=0).sum()
counts = counts.reset_index()
counts.columns = ['index', 'num']
counts['type'] = counts['index'].str.extract('(\d{3})')
counts_ = counts[['type', 'num']].groupby('type').sum()
the above codes are normal,but if I add the sentence below,python warns“'DataFrame' object has no attribute 'sort'”
counts_.sort('num', ascending = False)
A: ...Question solved.
The last code should be "counts_.sort_values('num',ascending=False)" instead.
|
Q: DataFrame' object has no attribute 'sort' hello~could u help me to solve the questions?
【Anaconda3-4.4.0】
import pandas as pd
from sqlalchemy import create_engine
engine = create_engine('mysql+pymysql://root:123456@localhost:3306/mysql?charset=utf8')
sql = pd.read_sql('all_gzdata', engine, chunksize = 10000)
counts = [ i['fullURLId'].value_counts() for i in sql]
counts = pd.concat(counts).groupby(level=0).sum()
counts = counts.reset_index()
counts.columns = ['index', 'num']
counts['type'] = counts['index'].str.extract('(\d{3})')
counts_ = counts[['type', 'num']].groupby('type').sum()
the above codes are normal,but if I add the sentence below,python warns“'DataFrame' object has no attribute 'sort'”
counts_.sort('num', ascending = False)
A: ...Question solved.
The last code should be "counts_.sort_values('num',ascending=False)" instead.
|
stackoverflow
|
{
"language": "en",
"length": 90,
"provenance": "stackexchange_0000F.jsonl.gz:864943",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44542563"
}
|
a7c9ff167a7b60ff67470fb60f5e7d68bd367f7c
|
Stackoverflow Stackexchange
Q: Use Laravel Notification for Email Verification You can send notifications via email like this in Laravel...
<?php
public function toMail($notifiable)
{
$url = url('/invoice/' . $this->invoice->id);
return (new MailMessage)
->greeting('Hello!')
->line('One of your invoices has been paid!')
->action('View Invoice', $url)
->line('Thank you for using our application!');
}
However, is it a good approach (in software design) to use this feature to send verification emails upon user registration?
A: yes, it is a fast way to send notification, we register a greeting, a line of text, a call to action, and then another line of text. These methods provided by the MailMessage object make it simple and fast to format small transactional emails. The mail channel will then translate the message components into a nice, responsive HTML email template with a plain-text counterpart.
you can also formatting the notification in better way for example:
*
*Error Messages
*Customizing The Recipient
*Customizing The Subject
*Customizing The Templates
reference Laravel Reference
|
Q: Use Laravel Notification for Email Verification You can send notifications via email like this in Laravel...
<?php
public function toMail($notifiable)
{
$url = url('/invoice/' . $this->invoice->id);
return (new MailMessage)
->greeting('Hello!')
->line('One of your invoices has been paid!')
->action('View Invoice', $url)
->line('Thank you for using our application!');
}
However, is it a good approach (in software design) to use this feature to send verification emails upon user registration?
A: yes, it is a fast way to send notification, we register a greeting, a line of text, a call to action, and then another line of text. These methods provided by the MailMessage object make it simple and fast to format small transactional emails. The mail channel will then translate the message components into a nice, responsive HTML email template with a plain-text counterpart.
you can also formatting the notification in better way for example:
*
*Error Messages
*Customizing The Recipient
*Customizing The Subject
*Customizing The Templates
reference Laravel Reference
A: Laravel Notifications is an all new feature coming to Laravel 5.3 that allows you to make quick notification updates through services like Slack, SMS, Email, and more.
This is great. Notifications are so simple and robust, you may no longer find yourself needing to use any other notification tool (mail, Slack SDK directly, etc.)—especially when you see how many custom notification channels the community has created. It's bonkers.
As always, with great power comes great responsibility; make sure you're being careful with your users' time and attention and you don't go overboard with the notifications.
So, go forth. Notify.
|
stackoverflow
|
{
"language": "en",
"length": 259,
"provenance": "stackexchange_0000F.jsonl.gz:864953",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44542582"
}
|
b29b5bcb0dc83430dc20231ba60cbe98cc7559dc
|
Stackoverflow Stackexchange
Q: EVP_get_cipherbyname always returns null I have a problem when calling EVP_get_cipherbyname on macOS:
const char *cipher_str = "aes-256-cbc";
const evp_cipher_st *cipher1 = EVP_aes_256_cbc();
const evp_cipher_st *cipher2 = EVP_get_cipherbyname(cipher_str);
In the code above, cipher1 will always be set to a valid evp_cipher_st * object, and cipher2 will always be null. I haven't found a single instance of cipher_str that produces a non-null cipher2.
Am I doing something wrong? Are there some other calls I should be making to get this to work?
A: You need to initialize the OpenSSL library first. If you just use libcrypto,
call:
OpenSSL_add_all_algorithms();
Refer to https://wiki.openssl.org/index.php/Library_Initialization for how to handle other situations or openssl versions.
|
Q: EVP_get_cipherbyname always returns null I have a problem when calling EVP_get_cipherbyname on macOS:
const char *cipher_str = "aes-256-cbc";
const evp_cipher_st *cipher1 = EVP_aes_256_cbc();
const evp_cipher_st *cipher2 = EVP_get_cipherbyname(cipher_str);
In the code above, cipher1 will always be set to a valid evp_cipher_st * object, and cipher2 will always be null. I haven't found a single instance of cipher_str that produces a non-null cipher2.
Am I doing something wrong? Are there some other calls I should be making to get this to work?
A: You need to initialize the OpenSSL library first. If you just use libcrypto,
call:
OpenSSL_add_all_algorithms();
Refer to https://wiki.openssl.org/index.php/Library_Initialization for how to handle other situations or openssl versions.
|
stackoverflow
|
{
"language": "en",
"length": 110,
"provenance": "stackexchange_0000F.jsonl.gz:864963",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44542613"
}
|
9ac4b08e8377812ccec53f124ef04ca1797557d6
|
Stackoverflow Stackexchange
Q: Hadoop cluster with docker swarm I'm trying to setup a hadoop cluster inside a docker swarm with multiple hosts, with a datanode on each docker node with a mounted volume.I made some tests and works fine, but the problem comes when a datanode dies and then return.
I restarted 2 host at the same time and when the containers run again, they get a new ip. The problem is that the namemode give a error because it thinks it is another datanode.
ERROR org.apache.hadoop.hdfs.StateChange: BLOCK* NameSystem.getDatanode: Data node 10.0.0.13:50010 is attempting to report storage ID 3a7b556f-7364-460e-beac-173132d77503. Node 10.0.0.9:50010 is expected to serve this storage.
Is is possible to prevent docker to assign a new ip, and instead keep the last ip after a restart?
Or there are any option for Hadoop config to fix this?
A: Static DHCP addresses for containers accessing an overlay network are officially not supported for the time being, as told here: https://github.com/moby/moby/issues/31860.
I hope, that docker will provide a solution for this very soon.
|
Q: Hadoop cluster with docker swarm I'm trying to setup a hadoop cluster inside a docker swarm with multiple hosts, with a datanode on each docker node with a mounted volume.I made some tests and works fine, but the problem comes when a datanode dies and then return.
I restarted 2 host at the same time and when the containers run again, they get a new ip. The problem is that the namemode give a error because it thinks it is another datanode.
ERROR org.apache.hadoop.hdfs.StateChange: BLOCK* NameSystem.getDatanode: Data node 10.0.0.13:50010 is attempting to report storage ID 3a7b556f-7364-460e-beac-173132d77503. Node 10.0.0.9:50010 is expected to serve this storage.
Is is possible to prevent docker to assign a new ip, and instead keep the last ip after a restart?
Or there are any option for Hadoop config to fix this?
A: Static DHCP addresses for containers accessing an overlay network are officially not supported for the time being, as told here: https://github.com/moby/moby/issues/31860.
I hope, that docker will provide a solution for this very soon.
|
stackoverflow
|
{
"language": "en",
"length": 170,
"provenance": "stackexchange_0000F.jsonl.gz:864969",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44542637"
}
|
ddc858d0a3989e32f8d6ded68a624523d68e63ed
|
Stackoverflow Stackexchange
Q: Why @Primary does not work for Kotlin classes sometimes? I have weirdest problem.
I have Java class A and I also have Kotlin class KA which extends A, both are @Components, KA is also annotated with @Primary.
In some components KA is autowired, in others A does.
Actually it's even more weird than that, for the same dependent bean between different application launches sometimes KA gets autowired, sometimes A.
If I rewrite KA in Java, then everything works as expected.
Attribute name/constructor parameter name in all the dependent classes is the same: @Autowired A a;.
Also it doesn't matter if my Kotlin implementation implements a common interface or extends a base class.
All Kotlin and Java classes live in src/main/java.
Kotlin version is 1.1.2-5, I use jvm8.
|
Q: Why @Primary does not work for Kotlin classes sometimes? I have weirdest problem.
I have Java class A and I also have Kotlin class KA which extends A, both are @Components, KA is also annotated with @Primary.
In some components KA is autowired, in others A does.
Actually it's even more weird than that, for the same dependent bean between different application launches sometimes KA gets autowired, sometimes A.
If I rewrite KA in Java, then everything works as expected.
Attribute name/constructor parameter name in all the dependent classes is the same: @Autowired A a;.
Also it doesn't matter if my Kotlin implementation implements a common interface or extends a base class.
All Kotlin and Java classes live in src/main/java.
Kotlin version is 1.1.2-5, I use jvm8.
|
stackoverflow
|
{
"language": "en",
"length": 128,
"provenance": "stackexchange_0000F.jsonl.gz:864979",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44542678"
}
|
abfaa0c9fa3af72d3179c37dbe6e43337253f59d
|
Stackoverflow Stackexchange
Q: Chrome 59 and Basic Authentication with Selenium/Fluentlenium Chrome 59 has removed support for https://user:[email protected] URLs.
I have a test which was using this feature which has now broken, so I'm trying to replace it with a version which waits for the authentication popup and fills in the details. But the following doesn't work on Chrome (which doesn't see the auth popup as an alert):
alert().authenticateUsing(new UserAndPassword("test", "test"));
The selenium-only version has the same issue:
WebDriverWait wait = new WebDriverWait(getDriver(), 10);
Alert alert = wait.until(ExpectedConditions.alertIsPresent());
alert.authenticateUsing(new UserAndPassword("test", "test"));
(based on the answer given here: How to handle authentication popup with Selenium WebDriver using Java)
I can see several workarounds for handling this in FireFox, but nothing for Chrome. Is there any alternative approach?
A: One solution is to run a transparent proxy to inject the header with the required credentials.
But another and easier solution is to create a small extension to automatically set the credentials:
https://gist.github.com/florentbr/25246cd9337cebc07e2bbb0b9bf0de46
|
Q: Chrome 59 and Basic Authentication with Selenium/Fluentlenium Chrome 59 has removed support for https://user:[email protected] URLs.
I have a test which was using this feature which has now broken, so I'm trying to replace it with a version which waits for the authentication popup and fills in the details. But the following doesn't work on Chrome (which doesn't see the auth popup as an alert):
alert().authenticateUsing(new UserAndPassword("test", "test"));
The selenium-only version has the same issue:
WebDriverWait wait = new WebDriverWait(getDriver(), 10);
Alert alert = wait.until(ExpectedConditions.alertIsPresent());
alert.authenticateUsing(new UserAndPassword("test", "test"));
(based on the answer given here: How to handle authentication popup with Selenium WebDriver using Java)
I can see several workarounds for handling this in FireFox, but nothing for Chrome. Is there any alternative approach?
A: One solution is to run a transparent proxy to inject the header with the required credentials.
But another and easier solution is to create a small extension to automatically set the credentials:
https://gist.github.com/florentbr/25246cd9337cebc07e2bbb0b9bf0de46
A: I'm sure Florent B's solutions are viable, but for retro-fitting an old test, I found that zoonabar's solution posted to this duplicate question is easier to implement, takes considerably less code, and requires no special preparation of the test box. It also seems that it would be easier to follow for new developers looking at the code.
In short: visiting any URL with credentials before visiting the URL under test (without credentials) will cause the browser to remember the credentials.
goTo("http://user:password@localhost"); // Caches auth, but page itself is blocked
goTo("http://localhost"); // Uses cached auth, page renders fine
// Continue test as normal
This may feel like a vulnerability in the browser which will be patched, but I think this is unlikely; the restriction has been imposed to avoid phishing risks (where the username chosen looks like a domain, e.g. "http://google.com:long-token-here-which-makes-the-real-domain-disappear@example.com/"), and this workaround for setting credentials doesn't pose the same risk.
See zoonabar's answer
A: Over in https://bugs.chromium.org/p/chromium/issues/detail?id=435547#c33 you can see a mkwst saying there was a bug regarding basic auth credentials and same origin sites made it into stable.
If you use the "--disable-blink-features=BlockCredentialedSubresources" or go to a Chrome Canary build you may find that the original problem you were seeing is not happening any more...
A: Florent B. found a solution with the help of a chrome extension, that is added on the fly in the selenium test. The extenion handles the basic auth credentials, if requiered:
ChromeOptions options = new ChromeOptions();
options.addExtensions(new File("C:/path_to/credentials_extension.zip"));
driver = new RemoteWebDriver(new URL("http://127.0.0.1:9515"), options);
Chrome extension code:
https://gist.github.com/florentbr/25246cd9337cebc07e2bbb0b9bf0de46
(just modify username and password in background.js and then zip the files background.js and manifest.json to credentials_extension.zip)
Found here: Selenium - Basic Authentication via url
|
stackoverflow
|
{
"language": "en",
"length": 439,
"provenance": "stackexchange_0000F.jsonl.gz:864996",
"question_score": "9",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44542740"
}
|
67b845a5a0f0347e09caae4c73ea28679b7ed2c1
|
Stackoverflow Stackexchange
Q: What is the alternative for "toNotEqual" in Jasmine? I am trying to write Unit Test in Jasmine and in my code, I am comparing two objects for inequality.
I am using following code to do it:
expect(obj1).toNotEqual(obj2)
But getting following error:
TypeError: expect(...).toNotEqual is not a function
Can anyone please suggest how to resolve this?
A: It could have been more useful if you specified the Jasmine Version you are using.
But anyway answer to your question is .. all Jasmine versions 1.3,2.0,2.1 to 2.5 don't support toNotEqual and in case you want to check inequality you have to chain NOT to expect before the matches.
Use not.toEqual for check inequality of object.
expect(obj1).not.toEqual(obj2)
toEqual matches deep equality. It does a recursive search through
the objects to determine whether the values for their keys are
equivalent.
toBe matches primitive types.
|
Q: What is the alternative for "toNotEqual" in Jasmine? I am trying to write Unit Test in Jasmine and in my code, I am comparing two objects for inequality.
I am using following code to do it:
expect(obj1).toNotEqual(obj2)
But getting following error:
TypeError: expect(...).toNotEqual is not a function
Can anyone please suggest how to resolve this?
A: It could have been more useful if you specified the Jasmine Version you are using.
But anyway answer to your question is .. all Jasmine versions 1.3,2.0,2.1 to 2.5 don't support toNotEqual and in case you want to check inequality you have to chain NOT to expect before the matches.
Use not.toEqual for check inequality of object.
expect(obj1).not.toEqual(obj2)
toEqual matches deep equality. It does a recursive search through
the objects to determine whether the values for their keys are
equivalent.
toBe matches primitive types.
|
stackoverflow
|
{
"language": "en",
"length": 141,
"provenance": "stackexchange_0000F.jsonl.gz:864997",
"question_score": "18",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44542743"
}
|
1f4c80754dc3c58ff663dcbd034671bbc5a24d38
|
Stackoverflow Stackexchange
Q: How to secure .env file in laravel 5.4? I am Working with laravel 5.4. And i have problem with .env and composer.json file. Anyone can access from any browser and anyone can see my database credentials so please help me to protect this files.
A: Simply you add below code to your .htaccess file to set permission of .env and composer.json file.
<Files .env>
Order allow,deny
Deny from all
</Files>
<Files composer.json>
Order allow,deny
Deny from all
</Files>
And below line for disabling directory browsing
Options All -Indexes
|
Q: How to secure .env file in laravel 5.4? I am Working with laravel 5.4. And i have problem with .env and composer.json file. Anyone can access from any browser and anyone can see my database credentials so please help me to protect this files.
A: Simply you add below code to your .htaccess file to set permission of .env and composer.json file.
<Files .env>
Order allow,deny
Deny from all
</Files>
<Files composer.json>
Order allow,deny
Deny from all
</Files>
And below line for disabling directory browsing
Options All -Indexes
A: Remember that once your server is configured to see the public folder as the document root, no one can view the files that one level down that folder, which means that your .env file is already protected, as well your entire application. - That is the reason the public folder is there, security. - The only directories that you can see in your browser if you set the document root to the public folder is the folders that are there, like the styles and scripts.
You can make a test like this:
Enter in your project directory with the terminal and hit this:
php -t public -S 127.0.0.1:80
The -t means the document root, where the PHP built-in web server will interpreter as the document root. - see bellow:
-t <docroot> Specify document root <docroot> for built-in web server.
Now try to access the .env file, and you will see that you will get a 404 that the resource as not found.
Of course it's just an example, you will need to configure your sever to do the same.
A: Nobody can view these files via the browser because the root of your website is located at /public and the composer.json and .env files are outside of this scope.
The only way to view these files is actually connecting to the web server and going to the corresponding folder.
A: you can add following code to your .htaccess (make sure your .htaccess file should be in root folder not in public)file to deny the permission of .env file
<FilesMatch "^\.env">
Order allow,deny
Deny from all
</FilesMatch>
A: Make sure it is on your .gitignore and you create it locally on your server.
|
stackoverflow
|
{
"language": "en",
"length": 372,
"provenance": "stackexchange_0000F.jsonl.gz:864999",
"question_score": "5",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44542753"
}
|
5ddc00430b4ae29182f34213fef0f2f63592537b
|
Stackoverflow Stackexchange
Q: Updating meteor from 1.4.3.2 to 1.5 causes a build error with ecmascript package on MacOS I have updated meteor for the project to 1.5 and have done the following:
*
*ran meteor update
*reinstalled node_modules
*ran meteor reset
The relevant github issue
The error message is:
While processing files with ecmascript (for target web.browser): /tools/isobuild/compiler-plugin.js:376:52: Cannot read property 'id' of null
at InputFile.resolve (/tools/isobuild/compiler-plugin.js:376:52)
at InputFile._require (/tools/isobuild/compiler-plugin.js:386:25)
at InputFile.require (/tools/isobuild/compiler-plugin.js:380:17)
at requireWithPrefix (packages/babel-compiler.js:428:32)
at requireWithPath (packages/babel-compiler.js:357:14)
at resolveHelper (packages/babel-compiler.js:330:22)
at packages/babel-compiler.js:306:19
at Array.forEach (native)
at walkHelper (packages/babel-compiler.js:305:22)
at walkBabelRC (packages/babel-compiler.js:295:7)
at resolveHelper (packages/babel-compiler.js:333:11)
at packages/babel-compiler.js:306:19
at Array.forEach (native)
at walkHelper (packages/babel-compiler.js:305:22)
at walkBabelRC (packages/babel-compiler.js:295:7)
at BabelCompiler.BCp._inferHelper (packages/babel-compiler.js:380:3)
at BabelCompiler.BCp._inferFromPackageJson (packages/babel-compiler.js:268:17)
at BabelCompiler.BCp.inferExtraBabelOptions (packages/babel-compiler.js:237:10)
at BabelCompiler.BCp.processOneFileForTarget (packages/babel-compiler.js:166:10)
at BabelCompiler.<anonymous> (packages/babel-compiler.js:109:26)
at Array.forEach (native)
at BabelCompiler.BCp.processFilesForTarget (packages/babel-compiler.js:108:14)
|
Q: Updating meteor from 1.4.3.2 to 1.5 causes a build error with ecmascript package on MacOS I have updated meteor for the project to 1.5 and have done the following:
*
*ran meteor update
*reinstalled node_modules
*ran meteor reset
The relevant github issue
The error message is:
While processing files with ecmascript (for target web.browser): /tools/isobuild/compiler-plugin.js:376:52: Cannot read property 'id' of null
at InputFile.resolve (/tools/isobuild/compiler-plugin.js:376:52)
at InputFile._require (/tools/isobuild/compiler-plugin.js:386:25)
at InputFile.require (/tools/isobuild/compiler-plugin.js:380:17)
at requireWithPrefix (packages/babel-compiler.js:428:32)
at requireWithPath (packages/babel-compiler.js:357:14)
at resolveHelper (packages/babel-compiler.js:330:22)
at packages/babel-compiler.js:306:19
at Array.forEach (native)
at walkHelper (packages/babel-compiler.js:305:22)
at walkBabelRC (packages/babel-compiler.js:295:7)
at resolveHelper (packages/babel-compiler.js:333:11)
at packages/babel-compiler.js:306:19
at Array.forEach (native)
at walkHelper (packages/babel-compiler.js:305:22)
at walkBabelRC (packages/babel-compiler.js:295:7)
at BabelCompiler.BCp._inferHelper (packages/babel-compiler.js:380:3)
at BabelCompiler.BCp._inferFromPackageJson (packages/babel-compiler.js:268:17)
at BabelCompiler.BCp.inferExtraBabelOptions (packages/babel-compiler.js:237:10)
at BabelCompiler.BCp.processOneFileForTarget (packages/babel-compiler.js:166:10)
at BabelCompiler.<anonymous> (packages/babel-compiler.js:109:26)
at Array.forEach (native)
at BabelCompiler.BCp.processFilesForTarget (packages/babel-compiler.js:108:14)
|
stackoverflow
|
{
"language": "en",
"length": 126,
"provenance": "stackexchange_0000F.jsonl.gz:865055",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44542914"
}
|
d83b46dd3e048e4e8c473501b022f973664b1c00
|
Stackoverflow Stackexchange
Q: App Engine Instances kept alive after new version installation Every time I upload a new version of my app engine, the old version automatically stops. But in one of my projects, there is a bug, and every time I upload a new versions, it does not kill the older version. This bug result in me paying for 16 different running instances last month because we had a lot of new versions:
How can I make sure it never happen again?
A: gcloud app deploy does not remove previous versions
tl;dr: Use the --version flag when deploying to specify a version name. An existing instance with the same version will be replaced then next time you deploy.
|
Q: App Engine Instances kept alive after new version installation Every time I upload a new version of my app engine, the old version automatically stops. But in one of my projects, there is a bug, and every time I upload a new versions, it does not kill the older version. This bug result in me paying for 16 different running instances last month because we had a lot of new versions:
How can I make sure it never happen again?
A: gcloud app deploy does not remove previous versions
tl;dr: Use the --version flag when deploying to specify a version name. An existing instance with the same version will be replaced then next time you deploy.
|
stackoverflow
|
{
"language": "en",
"length": 117,
"provenance": "stackexchange_0000F.jsonl.gz:865087",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44543039"
}
|
f9d20d51fbcf499b8e615d8184bb9bd8ab7f15e2
|
Stackoverflow Stackexchange
Q: GitKraken - How to exit fullscreen mode? I've entered fullscreen mode and now it seems impossible to exit it via the GUI.
How can I leave fullscreen mode again once entered?
A: Ctrl+Shift+F
or edit ~/.gitkraken/config
|
Q: GitKraken - How to exit fullscreen mode? I've entered fullscreen mode and now it seems impossible to exit it via the GUI.
How can I leave fullscreen mode again once entered?
A: Ctrl+Shift+F
or edit ~/.gitkraken/config
A: Alt + F will display the menu again.
Then click on the "View" item and then click the "Toggle Full Screen" menu item.
A: You can change it in GitKraken settings. Change setting windowSettings/fullScreen to false in the file ~/.gitkraken/config
A: You can toggle Fullscreen with Ctrl+Shift+F.
|
stackoverflow
|
{
"language": "en",
"length": 85,
"provenance": "stackexchange_0000F.jsonl.gz:865100",
"question_score": "107",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44543089"
}
|
4f313abb93f8d5340fdd44d1b30f8d11f1ee4244
|
Stackoverflow Stackexchange
Q: How to hide the Google Invisible reCAPTCHA badge When implementing the new Google Invisible reCATPTCHA, by default you get a little "protected by reCAPTCHA" badge in the bottom right of the screen that pops out when you roll over it.
I'd like to hide this.
A: A slight variant of Matthew Dowell's post which avoids the short flash, but displays whenever the contact form 7 form is visible:
div.grecaptcha-badge{
width:0 !important;
}
div.grecaptcha-badge.show{
width:256px !important;
}
I then added the following to the header.php in my child theme:
<script>
jQuery( window ).load(function () {
if( jQuery( '.wpcf7' ).length ){
jQuery( '.grecaptcha-badge' ).addClass( 'show' );
}
});
</script>
|
Q: How to hide the Google Invisible reCAPTCHA badge When implementing the new Google Invisible reCATPTCHA, by default you get a little "protected by reCAPTCHA" badge in the bottom right of the screen that pops out when you roll over it.
I'd like to hide this.
A: A slight variant of Matthew Dowell's post which avoids the short flash, but displays whenever the contact form 7 form is visible:
div.grecaptcha-badge{
width:0 !important;
}
div.grecaptcha-badge.show{
width:256px !important;
}
I then added the following to the header.php in my child theme:
<script>
jQuery( window ).load(function () {
if( jQuery( '.wpcf7' ).length ){
jQuery( '.grecaptcha-badge' ).addClass( 'show' );
}
});
</script>
A: Note: if you choose to hide the badge, please use
.grecaptcha-badge { visibility: hidden; }
You are allowed to hide the badge as long as you include the reCAPTCHA branding visibly in the user flow. Please include the following text:
This site is protected by reCAPTCHA and the Google
<a href="https://policies.google.com/privacy">Privacy Policy</a> and
<a href="https://policies.google.com/terms">Terms of Service</a> apply.
more details here reCaptacha
A: this does not disable the spam checking
div.g-recaptcha > div.grecaptcha-badge {
width:0 !important;
}
A: Google now allows to hide the Badge, from the FAQ :
I'd like to hide the reCAPTCHA badge. What is allowed?
You are allowed to hide the badge as long as you include the reCAPTCHA branding visibly in the user flow. Please include the following text:
This site is protected by reCAPTCHA and the Google
<a href="https://policies.google.com/privacy">Privacy Policy</a> and
<a href="https://policies.google.com/terms">Terms of Service</a> apply.
For example:
So you can simply hide it using the following CSS :
.grecaptcha-badge {
visibility: hidden;
}
Do not use display: none; as it appears to disable the spam checking (thanks @Zade)
A: Set the data-badge attribute to inline
<button type="submit" data-sitekey="your_site_key" data-callback="onSubmit" data-badge="inline" />
And add the following CSS
.grecaptcha-badge {
display: none;
}
A: My solution was to hide the badge, then display it when the user focuses on a form input - thus still adhering to Google's T&Cs.
Note: The reCAPTCHA I was tweaking had been generated by a WordPress plugin, so you may need to wrap the reCAPTCHA with a <div class="inv-recaptcha-holder"> ... </div> yourself.
CSS
.inv-recaptcha-holder {
visibility: hidden;
opacity: 0;
transition: linear opacity 1s;
}
.inv-recaptcha-holder.show {
visibility: visible;
opacity: 1;
transition: linear opacity 1s;
}
jQuery
$(document).ready(function () {
$('form input, form textarea').on( 'focus', function() {
$('.inv-recaptcha-holder').addClass( 'show' );
});
});
Obviously you can change the jQuery selector to target specific forms if necessary.
A: For users of Contact Form 7 on Wordpress this method is working for me:
I hide the v3 Recaptcha on all pages except those with Contact 7 Forms.
But this method should work on any site where you are using a unique class selector which can identify all pages with text input form elements.
First, I added a target style rule in CSS which can collapse the tile:
CSS
div.grecaptcha-badge.hide{
width:0 !important;
}
Then I added JQuery script in my header to trigger after the window loads so the 'grecaptcha-badge' class selector is available to JQuery, and can add the 'hide' class to apply the available CSS style.
$(window).load(function () {
if(!($('.wpcf7').length)){
$('.grecaptcha-badge').addClass( 'hide' );
}
});
My tile still will flash on every page for a half a second, but it's the best workaround I've found so far that I hope will comply. Suggestions for improvement appreciated.
A: For Google reCaptcha v3, the FAQ says:
You are allowed to hide the badge as long as you include the reCAPTCHA
branding visibly in the user flow. Please include the following text:
This site is protected by reCAPTCHA and the Google
<a href="https://policies.google.com/privacy">Privacy Policy</a> and
<a href="https://policies.google.com/terms">Terms of Service</a> apply.
For example:
Note: if you choose to hide the badge, please use
.grecaptcha-badge { visibility: hidden; }
It isn't clear whether it applies to reCaptcha v2. I suggest upgrading to v3 as it's a better experience for your visitors.
A: I have tested all approaches and:
WARNING: display: none DISABLES the spam checking!
visibility: hidden and opacity: 0 do NOT disable the spam checking.
Code to use:
.grecaptcha-badge {
visibility: hidden;
}
When you hide the badge icon, Google wants you to reference their service on your form by adding this:
<small>This site is protected by reCAPTCHA and the Google
<a href="https://policies.google.com/privacy">Privacy Policy</a> and
<a href="https://policies.google.com/terms">Terms of Service</a> apply.
</small>
A: If you are using the Contact Form 7 update and the latest version (version 5.1.x), you will need to install, setup Google reCAPTCHA v3 to use.
by default you get Google reCAPTCHA logo displayed on every page on the bottom right of the screen. This is according to our assessment is creating a bad experience for users. And your website, blog will slow down a bit (reflect by PageSpeed Score), by your website will have to load additional 1 JavaScript library from Google to display this badge.
You can hide Google reCAPTCHA v3 from CF7 (only show it when necessary) by following these steps:
First, you open the functions.php file of your theme (using File Manager or FTP Client). This file is locate in: /wp-content/themes/your-theme/ and add the following snippet (we’re using this code to remove reCAPTCHA box on every page):
remove_action( 'wp_enqueue_scripts', 'wpcf7_recaptcha_enqueue_scripts' );
Next, you will add this snippet in the page you want it to display Google reCAPTCHA (contact page, login, register page …):
if ( function_exists( 'wpcf7_enqueue_scripts' ) ) {
add_action( 'wp_enqueue_scripts', 'wpcf7_recaptcha_enqueue_scripts', 10, 0 );
}
Refer on OIW Blog - How To Remove Google reCAPTCHA Logo from Contact Form 7 in WordPress (Hide reCAPTCHA badge)
A: Since hiding the badge is not really legit as per the TOU, and existing placement options were breaking my UI and/or UX, I've come up with the following customization that mimics fixed positioning, but is instead rendered inline:
You just need to apply some CSS on your badge container:
.badge-container {
display: flex;
justify-content: flex-end;
overflow: hidden;
width: 70px;
height: 60px;
margin: 0 auto;
box-shadow: 0 0 4px #ddd;
transition: linear 100ms width;
}
.badge-container:hover {
width: 256px;
}
I think that's as far as you can legally push it.
A: Yes, you can do it. you can either use css or javascript to hide the reCaptcha v3 badge.
*
*The CSS Way
use display: none or visibility: hidden to hide the reCaptcha batch. It's easy and quick.
.grecaptcha-badge {
display:none !important;
}
*The Javascript Way
var el = document.querySelector('.grecaptcha-badge');
el.style.display = 'none';
Hiding the badge is valid, according to the google policy and answered in faq here. It is recommended to show up the privacy policy and terms of use from google as shown below.
A: I decided to hide the badge on all pages except my contact page (using Wordpress):
/* Hides the reCAPTCHA on every page */
.grecaptcha-badge {
visibility: hidden !important;
}
/* Shows the reCAPTCHA on the Contact page */
/* Obviously change the page number to your own */
.page-id-17 .grecaptcha-badge {
visibility: visible !important;
}
I'm not a web developer so please correct me if there's something wrong.
EDIT: Updated to use visibility instead of display.
A: I saw next comment about this
It's also helpful to place the badge inline if you want to apply your own CSS to it. But do remember that you agreed to show Google's Terms and conditions when you registered for an API key - so don't hide it, please. And while it is possible to make the badge disappear completely with CSS, we wouldn't recommend it.
A: Recaptcha contact form 7 and Recaptcha v3 solution.
body:not(.page-id-20) .grecaptcha-badge {
display: none;
}
More Than One Contact Form Page?
body:not(.page-id-12):not(.page-id-43) .grecaptcha-badge {
display: none;
}
You can add more “nots” if you have more contact form pages.
body:not(.page-id-45):not(.page-id-78):not(.page-id-98) .grecaptcha-badge {
display: none;
}
Make sure that your body section will like this:
<body>
Change it so that it looks like this:
<body <?php body_class(); ?>>
|
stackoverflow
|
{
"language": "en",
"length": 1313,
"provenance": "stackexchange_0000F.jsonl.gz:865115",
"question_score": "237",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44543157"
}
|
fb8ea0d0d253c2f630f2de40ffc8578a4b1131d3
|
Stackoverflow Stackexchange
Q: NameError: name 'QuerySet' is not defined I'm getting the NameError: name 'QuerySet' is not defined error in below line.
QuerySet(query=MappingTraineeQ.objects.filter(date__range=(startdate,enddate)).query, model=MappingTraineeQ)
I'm not getting the problem, do I need to import anything for this? or I missed something.
please help me with the above.
Thanks
A: You could import Queryset like,
from django.db.models.query import QuerySet
|
Q: NameError: name 'QuerySet' is not defined I'm getting the NameError: name 'QuerySet' is not defined error in below line.
QuerySet(query=MappingTraineeQ.objects.filter(date__range=(startdate,enddate)).query, model=MappingTraineeQ)
I'm not getting the problem, do I need to import anything for this? or I missed something.
please help me with the above.
Thanks
A: You could import Queryset like,
from django.db.models.query import QuerySet
|
stackoverflow
|
{
"language": "en",
"length": 56,
"provenance": "stackexchange_0000F.jsonl.gz:865129",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44543218"
}
|
26bbf668cc42f4e4cac580421ae06da730f014f2
|
Stackoverflow Stackexchange
Q: easyRTC Error connecting to socket.io server I am trying to run nodejs on php file.
<script src="htps://xxx.com:8445/socket.io/socket.io.js"></script>
<script src="htps://xxx.com:8445/easyrtc/easyrtc.js" type="text/javascript"></script>
Server.js :
// Load required modules
var https = require("https"); // https server core module
var fs = require("fs"); // file system core module
var express = require("express"); // web framework external module
var io = require("socket.io"); // web socket external module
var easyrtc = require("../"); // EasyRTC external module
// Setup and configure Express http server. Expect a subfolder called "static" to be the web root.
var httpApp = express();
httpApp.use(express.static(__dirname + ":8445"));
// Start Express https server on port 8445
var webServer = https.createServer(
{
key: fs.readFileSync("/etc/apache2/ssl/xxx.com.key"),
cert: fs.readFileSync("/etc/apache2/ssl/xxx.com.crt")
},
httpApp).listen(8445);
// Start Socket.io so it attaches itself to Express server
var socketServer = io.listen(webServer, {"log level":1});
// Start EasyRTC server
var rtc = easyrtc.listen(httpApp, socketServer);
ERROR :
Nodejs works but i got an error like that. If i didnt use PHP, it can work.
A: For anyone coming from google with this problem, easyRTC was assuming the http server and the ws server was on the same url/port. To fix the issue the method easyrtc.setSocketUrl("https://thedomain.com.tr:8445"); was required to run after including easyRTC
|
Q: easyRTC Error connecting to socket.io server I am trying to run nodejs on php file.
<script src="htps://xxx.com:8445/socket.io/socket.io.js"></script>
<script src="htps://xxx.com:8445/easyrtc/easyrtc.js" type="text/javascript"></script>
Server.js :
// Load required modules
var https = require("https"); // https server core module
var fs = require("fs"); // file system core module
var express = require("express"); // web framework external module
var io = require("socket.io"); // web socket external module
var easyrtc = require("../"); // EasyRTC external module
// Setup and configure Express http server. Expect a subfolder called "static" to be the web root.
var httpApp = express();
httpApp.use(express.static(__dirname + ":8445"));
// Start Express https server on port 8445
var webServer = https.createServer(
{
key: fs.readFileSync("/etc/apache2/ssl/xxx.com.key"),
cert: fs.readFileSync("/etc/apache2/ssl/xxx.com.crt")
},
httpApp).listen(8445);
// Start Socket.io so it attaches itself to Express server
var socketServer = io.listen(webServer, {"log level":1});
// Start EasyRTC server
var rtc = easyrtc.listen(httpApp, socketServer);
ERROR :
Nodejs works but i got an error like that. If i didnt use PHP, it can work.
A: For anyone coming from google with this problem, easyRTC was assuming the http server and the ws server was on the same url/port. To fix the issue the method easyrtc.setSocketUrl("https://thedomain.com.tr:8445"); was required to run after including easyRTC
|
stackoverflow
|
{
"language": "en",
"length": 196,
"provenance": "stackexchange_0000F.jsonl.gz:865157",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44543286"
}
|
c9f5557e0884aabeea69f2a456a87ed291fc27f4
|
Stackoverflow Stackexchange
Q: "cannot take the address of" and "cannot call pointer method on" This compiles and works:
diff := projected.Minus(c.Origin)
dir := diff.Normalize()
This does not (yields the errors in the title):
dir := projected.Minus(c.Origin).Normalize()
Can someone help me understand why? (learning Go)
Here are those methods:
// Minus subtracts another vector from this one
func (a *Vector3) Minus(b Vector3) Vector3 {
return Vector3{a.X - b.X, a.Y - b.Y, a.Z - b.Z}
}
// Normalize makes the vector of length 1
func (a *Vector3) Normalize() Vector3 {
d := a.Length()
return Vector3{a.X / d, a.Y / d, a.Z / d}
}
A: The accepted answer is really long so I'm just going to post what helped me:
I got this error regarding this line:
services.HashingServices{}.Hash("blabla")
so I just changed it to:
(&services.HashingServices{}).Hash("blabla")
|
Q: "cannot take the address of" and "cannot call pointer method on" This compiles and works:
diff := projected.Minus(c.Origin)
dir := diff.Normalize()
This does not (yields the errors in the title):
dir := projected.Minus(c.Origin).Normalize()
Can someone help me understand why? (learning Go)
Here are those methods:
// Minus subtracts another vector from this one
func (a *Vector3) Minus(b Vector3) Vector3 {
return Vector3{a.X - b.X, a.Y - b.Y, a.Z - b.Z}
}
// Normalize makes the vector of length 1
func (a *Vector3) Normalize() Vector3 {
d := a.Length()
return Vector3{a.X / d, a.Y / d, a.Z / d}
}
A: The accepted answer is really long so I'm just going to post what helped me:
I got this error regarding this line:
services.HashingServices{}.Hash("blabla")
so I just changed it to:
(&services.HashingServices{}).Hash("blabla")
A: The Vector3.Normalize() method has a pointer receiver, so in order to call this method, a pointer to Vector3 value is required (*Vector3). In your first example you store the return value of Vector3.Minus() in a variable, which will be of type Vector3.
Variables in Go are addressable, and when you write diff.Normalize(), this is a shortcut, and the compiler will automatically take the address of the diff variable to have the required receiver value of type *Vector3 in order to call Normalize(). So the compiler will "transform" it to
(&diff).Normalize()
This is detailed in Spec: Calls:
A method call x.m() is valid if the method set of (the type of) x contains m and the argument list can be assigned to the parameter list of m. If x is addressable and &x's method set contains m, x.m() is shorthand for (&x).m().
The reason why your second example doesn't work is because return values of function and method calls are not addressable, so the compiler is not able to do the same here, the compiler is not able to take the address of the return value of the Vector3.Minus() call.
What is addressable is exactly listed in the Spec: Address operators:
The operand must be addressable, that is, either a variable, pointer indirection, or slice indexing operation; or a field selector of an addressable struct operand; or an array indexing operation of an addressable array. As an exception to the addressability requirement, x [in the expression of &x] may also be a (possibly parenthesized) composite literal.
See related questions:
How to get the pointer of return value from function call?
How can I store reference to the result of an operation in Go?
Possible "workarounds"
"Easiest" (requiring the least change) is simply to assign to a variable, and call the method after that. This is your first working solution.
Another way is to modify the methods to have a value receiver (instead of pointer receiver), so that there is no need to take the address of the return values of the methods, so calls can be "chained". Note that this might not be viable if a method needs to modify the receiver, as that is only possible if it is a pointer (as the receiver is passed just like any other parameters – by making a copy –, and if it's not a pointer, you could only modify the copy).
Another way is to modify the return values to return pointers (*Vector3) instead of Vector3. If the return value is already a pointer, no need to take its address as it's good as-is for the receiver to a method that requires a pointer receiver.
You may also create a simple helper function which returns its address. It could look something like this:
func pv(v Vector3) *Vector3 {
return &v
}
Using it:
dir := pv(projected.Minus(c.Origin)).Normalize()
This could also be a method of Vector3, e.g.:
func (v Vector3) pv() *Vector3 {
return &v
}
And then using it:
dir := projected.Minus(c.Origin).pv().Normalize()
Some notes:
If your type consists of 3 float64 values only, you should not see significant performance differences. But you should be consistent about your receiver and result types. If most of your methods have pointer receivers, so should all of them. If most of your methods return pointers, so should all of them.
|
stackoverflow
|
{
"language": "en",
"length": 685,
"provenance": "stackexchange_0000F.jsonl.gz:865183",
"question_score": "28",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44543374"
}
|
711e2b993087b4a84f4f00c3c8e9bf36ebb2732e
|
Stackoverflow Stackexchange
Q: Fixed size array alias with "using" syntax Consider the following typedef:
typedef int int_array_of_size_4[4];
Is there an equivalent with the newer "using" syntax? If yes, what is it?
EDIT: This isn't a duplicate of this question, since that was about aliasing an array of unknown bound.
A: Yes:
using int_array_of_size_4 = int[4];
live example on wandbox
|
Q: Fixed size array alias with "using" syntax Consider the following typedef:
typedef int int_array_of_size_4[4];
Is there an equivalent with the newer "using" syntax? If yes, what is it?
EDIT: This isn't a duplicate of this question, since that was about aliasing an array of unknown bound.
A: Yes:
using int_array_of_size_4 = int[4];
live example on wandbox
|
stackoverflow
|
{
"language": "en",
"length": 57,
"provenance": "stackexchange_0000F.jsonl.gz:865185",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44543378"
}
|
07f7ea15039df27051d3763caf7b12b6f8ed8495
|
Stackoverflow Stackexchange
Q: How to use helpers in handlebars with webpack(handlebars-loader) I am using Handlebars in my project and bundling templates using webpack. I am using handlebars-loader to compile templates. I got issue when I created a small helper. Webpack shows this error when I use helper in my template:
You specified knownHelpersOnly, but used the unknown helper withCurrentItem - 5:4
This is my code:
Webapck:
{
test : /\.(tpl|hbs)$/,
loader : "handlebars-loader?helperDirs[]=" + __dirname + "templates/helpers"
// use : 'handlebars-loader?helperDirs[]=false' + __dirname + 'templates/helpers'
},
Helper(project/templates/helpers/withCurrentItem.js):
export default function (context, options) {
const contextWithCurrentItem = context
contextWithCurrentItem.currentItem = options.hash.currentItem
return options.fn(contextWithCurrentItem)
}
Template file(project/templates/products.tpl):
{{> partials/filters}}
<ul class="u-4-5">
{{#each data.products}}
{{> partials/product}}
{{withCurrentItem ../styles currentItem=this}}
{{/each}}
</ul>
I tried to resolve the problem and searched over the internet but I couldn't find any thing. This is what I have tried to:
*
*Add helperDirs[] query param to loader as:
loader : "handlebars-loader?helperDirs[]=" + __dirname + "templates/helpers"
*Add helpers directory path to resolve.modules property of webpack config file
Sadly, none of them work.
A: [email protected] and [email protected]:
{
test: /\.hbs$/,
loader: 'handlebars-loader',
options: {
helperDirs: path.join(__dirname, 'path/to/helpers'),
precompileOptions: {
knownHelpersOnly: false,
},
},
},
Update 2021: also works with webpack@4+.
|
Q: How to use helpers in handlebars with webpack(handlebars-loader) I am using Handlebars in my project and bundling templates using webpack. I am using handlebars-loader to compile templates. I got issue when I created a small helper. Webpack shows this error when I use helper in my template:
You specified knownHelpersOnly, but used the unknown helper withCurrentItem - 5:4
This is my code:
Webapck:
{
test : /\.(tpl|hbs)$/,
loader : "handlebars-loader?helperDirs[]=" + __dirname + "templates/helpers"
// use : 'handlebars-loader?helperDirs[]=false' + __dirname + 'templates/helpers'
},
Helper(project/templates/helpers/withCurrentItem.js):
export default function (context, options) {
const contextWithCurrentItem = context
contextWithCurrentItem.currentItem = options.hash.currentItem
return options.fn(contextWithCurrentItem)
}
Template file(project/templates/products.tpl):
{{> partials/filters}}
<ul class="u-4-5">
{{#each data.products}}
{{> partials/product}}
{{withCurrentItem ../styles currentItem=this}}
{{/each}}
</ul>
I tried to resolve the problem and searched over the internet but I couldn't find any thing. This is what I have tried to:
*
*Add helperDirs[] query param to loader as:
loader : "handlebars-loader?helperDirs[]=" + __dirname + "templates/helpers"
*Add helpers directory path to resolve.modules property of webpack config file
Sadly, none of them work.
A: [email protected] and [email protected]:
{
test: /\.hbs$/,
loader: 'handlebars-loader',
options: {
helperDirs: path.join(__dirname, 'path/to/helpers'),
precompileOptions: {
knownHelpersOnly: false,
},
},
},
Update 2021: also works with webpack@4+.
A: For me, none of these approaches worked. I used runtime option to create my own instance of Handlebars (thanks to this comment):
webpack.config.js
module: {
rules: [
{
test: /\.(handlebars|hbs)$/,
loader: 'handlebars-loader',
options: {
runtime: path.resolve(__dirname, 'path/to/handlebars'),
},
},
path/to/handlebars.js
const Handlebars = require('handlebars/runtime');
Handlebars.registerHelper('loud', function(aString) {
return aString.toUpperCase();
});
module.exports = Handlebars;
A: Following config worked for me in webpack 4
// webpack
{
test: /\.hbs$/,
use: [{
loader: 'handlebars-loader?runtime=handlebars/runtime',
options: {
precompileOptions: {
knownHelpersOnly: false,
}
}
}]
}
// helpers/ifEq.js
module.exports = function (a, b, opts) {
if (a === b) {
return opts.fn(this);
}
return opts.inverse(this);
}
|
stackoverflow
|
{
"language": "en",
"length": 300,
"provenance": "stackexchange_0000F.jsonl.gz:865203",
"question_score": "5",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44543434"
}
|
1fe81f6ab5162a51e20415380bce3c035f65491a
|
Stackoverflow Stackexchange
Q: Why does invokeLater execute in the main thread? I just encountered this "bug", but I'm not sure if this is intended:
Code:
public static Object someMethod(){
assert SwingUtilities.isEventDispatchThread();
return new Object();
}
public static void main(String[] args){
SwingUtilities.invokeLater(() -> someMethod().toString());//First Example
SwingUtilities.invokeLater(someMethod()::toString);//Second Example
}
In the first example someMethod is being executed on the swing Thread, but in the second example it is not, although it should be in my opinion.
Is this a bug or is this intended?
A: This is not related to Swing, it's what happens when using method references and lambdas behind the scenes.
A simpler example:
public static void main(String[] args) {
Stream.of(1, 2, 3).map(initMapper()::inc);
Stream.of(1, 2, 3).map(x -> initMapper().inc(x));
}
private static Mapper initMapper() {
System.out.println("init");
return new Mapper();
}
static class Mapper {
public int inc(int x) {
return x + 1;
}
}
You will get a single init output here; notice that there is no terminal operation for the stream.
|
Q: Why does invokeLater execute in the main thread? I just encountered this "bug", but I'm not sure if this is intended:
Code:
public static Object someMethod(){
assert SwingUtilities.isEventDispatchThread();
return new Object();
}
public static void main(String[] args){
SwingUtilities.invokeLater(() -> someMethod().toString());//First Example
SwingUtilities.invokeLater(someMethod()::toString);//Second Example
}
In the first example someMethod is being executed on the swing Thread, but in the second example it is not, although it should be in my opinion.
Is this a bug or is this intended?
A: This is not related to Swing, it's what happens when using method references and lambdas behind the scenes.
A simpler example:
public static void main(String[] args) {
Stream.of(1, 2, 3).map(initMapper()::inc);
Stream.of(1, 2, 3).map(x -> initMapper().inc(x));
}
private static Mapper initMapper() {
System.out.println("init");
return new Mapper();
}
static class Mapper {
public int inc(int x) {
return x + 1;
}
}
You will get a single init output here; notice that there is no terminal operation for the stream.
A: To me it seems like a misunderstanding on your side
The first line is like saying: "Ok, Swing, what I want you to invokeLater is someMethod().toString()". So Swing executes it
The second line is like saying: "Ok, Swing, what I want you to invokeLater is the method toString() of the object returned by the method someMethod()". A someMethod() method that I am executing right now
So the result is completely logical to me
Just keep in mind that before evaluating a function (in this case invokeLater) Java needs to evaluate all arguments. So in the first case Java evaluate a lambda function (no need to execute it) and in the second case it encounters a method invocation so it needs to execute it
|
stackoverflow
|
{
"language": "en",
"length": 284,
"provenance": "stackexchange_0000F.jsonl.gz:865213",
"question_score": "24",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44543465"
}
|
58b0a191de6e356af114cb35c24abebf24c20d2a
|
Stackoverflow Stackexchange
Q: Index specification in printf format specifier In the following program, printf() function print according to argument index specification.
#include <stdio.h>
int main()
{
printf("%3$d %4$f %2$d %1$d\n", 1, 2, 3, 4.5);
}
Output:
3 4.500000 2 1
So, I have question, Is it valid for any compiler or support only GCC compiler?
A: As you suspect, numbered argument conversion specifications (that is, n$) are not required by the C standard. But they are not idiosyncratic to the Gnu implementation; they are required by the POSIX standard (specification here) and have been since at least the 1997 version 2.
Most Unix and unix-like implementations feature some measure of Posix compliance, and numbered argument specifications are not a recent addition. So you will likely find support in most current Unix and Unix-like platforms, including any which use the Gnu standard C library (Linux) or the FreeBSD standard C library (Mac OS X). However, the (native) Windows C standard library (which is not Posix-compliant) only provides support for numbered argument specifications if you use the *printf_p family of functions. See here and here.)
|
Q: Index specification in printf format specifier In the following program, printf() function print according to argument index specification.
#include <stdio.h>
int main()
{
printf("%3$d %4$f %2$d %1$d\n", 1, 2, 3, 4.5);
}
Output:
3 4.500000 2 1
So, I have question, Is it valid for any compiler or support only GCC compiler?
A: As you suspect, numbered argument conversion specifications (that is, n$) are not required by the C standard. But they are not idiosyncratic to the Gnu implementation; they are required by the POSIX standard (specification here) and have been since at least the 1997 version 2.
Most Unix and unix-like implementations feature some measure of Posix compliance, and numbered argument specifications are not a recent addition. So you will likely find support in most current Unix and Unix-like platforms, including any which use the Gnu standard C library (Linux) or the FreeBSD standard C library (Mac OS X). However, the (native) Windows C standard library (which is not Posix-compliant) only provides support for numbered argument specifications if you use the *printf_p family of functions. See here and here.)
|
stackoverflow
|
{
"language": "en",
"length": 181,
"provenance": "stackexchange_0000F.jsonl.gz:865239",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44543540"
}
|
5debd3ca1d4ec35716b0dabecd1785c8c4a6e27c
|
Stackoverflow Stackexchange
Q: The module root could not be found. There is nothing to output Running a terraform output in my root Terraform directory I get:
The module root could not be found. There is nothing to output.
I have the following files:
iam.tf:
resource "aws_iam_user" "a_user" {
name = "a_user"
}
output.tf:
data "aws_caller_identity" "current" {}
output "account_id" {
value = "${data.aws_caller_identity.current.account_id}"
}
This https://www.terraform.io/docs/modules/index.html says:
Root module That is the current working directory when you run terraform apply or get, holding the Terraform configuration files. It is itself a valid module.
Any idea why the error message and how to fix?
A: Terraform refers root module from terraform.tfstate file.
This file conatains all info about your last known state from .tf files along with output variables.
Which is generated after first execution terraform apply command into current directory.
Simply run terraform apply
, then terraform output will shows your output variables.
|
Q: The module root could not be found. There is nothing to output Running a terraform output in my root Terraform directory I get:
The module root could not be found. There is nothing to output.
I have the following files:
iam.tf:
resource "aws_iam_user" "a_user" {
name = "a_user"
}
output.tf:
data "aws_caller_identity" "current" {}
output "account_id" {
value = "${data.aws_caller_identity.current.account_id}"
}
This https://www.terraform.io/docs/modules/index.html says:
Root module That is the current working directory when you run terraform apply or get, holding the Terraform configuration files. It is itself a valid module.
Any idea why the error message and how to fix?
A: Terraform refers root module from terraform.tfstate file.
This file conatains all info about your last known state from .tf files along with output variables.
Which is generated after first execution terraform apply command into current directory.
Simply run terraform apply
, then terraform output will shows your output variables.
A: You haven't added your module config above, but assuming you have a module file then you have to tell terraform about the source. if the source is a sub directory called example in the same location as iam.tf and output.tf, then you have to add module as bellow, then run terraform apply from the directory where output.tf and iam.tf are:
module "consul" {
source = "./example"
}
If your output is a remote location (e.g github) then source has to be as below
module "consul" {
source = "github.com/some-git.git"
}
Then you have to run "terraform get" to download your module. Then "terraform apply" to apply the module, then "terraform output" to list the output you specified above
A: The problem is you have not added your module config file. Something along
module "test_module" {
source = "./test_module"
}
You have to make sure the module config exists and also the source is valid. To get output, you need a state file which is created after running terraform apply. Looks like you either dont have one or you have no output in your state file.
|
stackoverflow
|
{
"language": "en",
"length": 337,
"provenance": "stackexchange_0000F.jsonl.gz:865245",
"question_score": "7",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44543555"
}
|
79788e04d5e5c66ab873dd51313ead63f1625fa1
|
Stackoverflow Stackexchange
Q: Jasper Reports - java.lang.ClassNotFoundException: oracle.jdbc.OracleBlob when using BLOB as Detail Field I want to show BLOB Images in a report. But if I drop the BLOB Field into the detail band I just get the error message: java.lang.ClassNotFoundException: oracle.jdbc.OracleBlob cannot be found by net.sf.jasperreports_6.2.0.final
I added the ojdbc6.jar to the classpath of my database connection before.
Does anybody have a solution here?
Thanks for help guys!
A: Recently I had asimilar issue and the solution was to upgrade the oracle driver. I changed ojdbc14.jar by ojdbc6.jar and the problem was solved.
|
Q: Jasper Reports - java.lang.ClassNotFoundException: oracle.jdbc.OracleBlob when using BLOB as Detail Field I want to show BLOB Images in a report. But if I drop the BLOB Field into the detail band I just get the error message: java.lang.ClassNotFoundException: oracle.jdbc.OracleBlob cannot be found by net.sf.jasperreports_6.2.0.final
I added the ojdbc6.jar to the classpath of my database connection before.
Does anybody have a solution here?
Thanks for help guys!
A: Recently I had asimilar issue and the solution was to upgrade the oracle driver. I changed ojdbc14.jar by ojdbc6.jar and the problem was solved.
A: If you've got a maven project you might be missing in your pom.xml the following:
<properties>
<ojdbc6.version>11.2.0.2.0</ojdbc6.version>
</properties>
<dependency>
<groupId>com.oracle.ojdbc6</groupId>
<artifactId>ojdbc6</artifactId>
<version>${ojdbc6.version}</version>
</dependency>
|
stackoverflow
|
{
"language": "en",
"length": 116,
"provenance": "stackexchange_0000F.jsonl.gz:865254",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44543585"
}
|
ee36fac1a5961d835a689aaf63b5ba993107cdc6
|
Stackoverflow Stackexchange
Q: How to enable a pop up for authentication for electron? I am creating an electron app that accesses a url. And when navigated to the URL the user clicks on a button and are redirected to a URL that displays this pop up in Chrome.
How can I enable/show this popup in electron? It doesn't seem to enable it by default.
A: What you see on the picture is that Chrome opens a popup for handle authentication event.
However, Electron doesn't make such popup by default, as it stated in the documentation of 'login' event
The default behavior is to cancel all authentications, to override this you should prevent the default behavior with event.preventDefault() and call callback(username, password) with the credentials.
This means, you should handle 'login' event of your webContents manually and open a popup window by yourself or do whatever you want.
|
Q: How to enable a pop up for authentication for electron? I am creating an electron app that accesses a url. And when navigated to the URL the user clicks on a button and are redirected to a URL that displays this pop up in Chrome.
How can I enable/show this popup in electron? It doesn't seem to enable it by default.
A: What you see on the picture is that Chrome opens a popup for handle authentication event.
However, Electron doesn't make such popup by default, as it stated in the documentation of 'login' event
The default behavior is to cancel all authentications, to override this you should prevent the default behavior with event.preventDefault() and call callback(username, password) with the credentials.
This means, you should handle 'login' event of your webContents manually and open a popup window by yourself or do whatever you want.
|
stackoverflow
|
{
"language": "en",
"length": 145,
"provenance": "stackexchange_0000F.jsonl.gz:865274",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44543664"
}
|
5f7c4a33ff03a91f769fa0da09d69357890b7bf8
|
Stackoverflow Stackexchange
Q: vscode - Merge one branch into another Is it possible to merge one branch into another using the vscode interface and not the terminal? or is there any extension that makes this possible?
using vscode v1.13.0
Thanks in advance.
A: I have created an extension which is available under the name: Git Merger in the vscode marketplace.
|
Q: vscode - Merge one branch into another Is it possible to merge one branch into another using the vscode interface and not the terminal? or is there any extension that makes this possible?
using vscode v1.13.0
Thanks in advance.
A: I have created an extension which is available under the name: Git Merger in the vscode marketplace.
A: Use Git: merge branch command
VSCode now has built-in branch merge support.
see In Visual Studio Code How do I merge between two local branches?
Press Ctrl/Cmd+Shift+P (Or View > Command Palette...) and look for Git: merge branch.
You select the branch to merge from (if you have a workspace open, the command lets you choose the project within the workspace).
Conflict management is also implemented, highlighting the Git conflict markers (see link above).
The branch gets merged with the commit message "Merged branch '[branchname]'"
A: Nowadays (2017-06-14) you can't, there is no extension to do so in VSCode. If you have time and skills, you can try to write your own extension, you're probably not the only one to want a feature like that, so you'll find contributors if you host it on Github for example.
You'll use the terminal a bit more. Sorry.
|
stackoverflow
|
{
"language": "en",
"length": 204,
"provenance": "stackexchange_0000F.jsonl.gz:865301",
"question_score": "9",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44543755"
}
|
904ed452c66bcd340734a9aa35282d99404b78ae
|
Stackoverflow Stackexchange
Q: Do Lombok Builders use Reflection? Do Lombok Builders use Reflection?
or does it also add the necessary code pre-compile?
A: There's definitely no reflection at all as Lombok uses sort of code generation. It works on the AST (Abstract Syntax Tree) level, i.e., it works with the parsed source, somewhere between the source code and the bytecode.
A part of Lombok is delombok, which shows you rather exactly what code was generated.
|
Q: Do Lombok Builders use Reflection? Do Lombok Builders use Reflection?
or does it also add the necessary code pre-compile?
A: There's definitely no reflection at all as Lombok uses sort of code generation. It works on the AST (Abstract Syntax Tree) level, i.e., it works with the parsed source, somewhere between the source code and the bytecode.
A part of Lombok is delombok, which shows you rather exactly what code was generated.
A: The official site is pretty clear on how @Builder works:
@Builder can be placed on a class, or on a constructor, or on a method. While the "on a class" and "on a constructor" mode are the most common use-case, @Builder is most easily explained with the "method" use-case.
A method annotated with @Builder (from now on called the target) causes the following 7 things to be generated:
*
*An inner static class named FooBuilder, with the same type arguments as the static method (called the builder).
*In the builder: One private non-static non-final field for each parameter of the target.
*In the builder: A package private no-args empty constructor.
*In the builder: A 'setter'-like method for each parameter of the target: It has the same type as that parameter and the same name. It returns the builder itself, so that the setter calls can be chained, as in the above example.
*In the builder: A build() method which calls the method, passing in each field. It returns the same type that the target returns.
*In the builder: A sensible toString() implementation.
*In the class containing the target: A builder() method, which creates a new instance of the builder.
So for example, it might look like this:
public static class FooBuilder {
private String abc;
private String def;
FooBuilder() {}
public FooBuilder abc(String abc) {
this.abc = abc;
return this;
}
public FooBuilder def(String def) {
this.def = def;
return this;
}
public Foo build() {
return new Foo(abc, def);
}
@Override
public String toString() {
return "FooBuilder{abc: " + abc + ", def: " + def + "}";
}
}
|
stackoverflow
|
{
"language": "en",
"length": 345,
"provenance": "stackexchange_0000F.jsonl.gz:865311",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44543785"
}
|
0c0807b43368a9d264276f954dbcda3d7b7a5e6d
|
Stackoverflow Stackexchange
Q: Google app engine No module named appengine.api Deploying python app engine project.
It works when I deploy locally in a virtual environment, but when deploying to google app engine I get an error (in the terminal):
from google.appengine.api import memcache
ImportError: No module named appengine.api
I can import google, but I cannot import google.appengine o r anything inside it.
my app.yaml
runtime: python
env: flex
entrypoint: gunicorn -b :$PORT main:app
runtime_config:
python_version: 2
threadsafe: true
- url: /.*
script: main.app
secure: always
requirements.txt
Flask==0.12.1
gunicorn==19.7.1
numpy==1.12.1
scipy==0.19.0
Pillow==4.1.1
scikit_learn==0.17
python-dateutil
webapp2==3.0.0b1
google-cloud-storage==1.1.1
pycrypto==2.6
google-api-python-client==1.5.0
How do I solve this issue?
Edit:
Additional info:
Google Cloud SDK 158.0.0
app-engine-python 1.9.54
deploying from macOS
|
Q: Google app engine No module named appengine.api Deploying python app engine project.
It works when I deploy locally in a virtual environment, but when deploying to google app engine I get an error (in the terminal):
from google.appengine.api import memcache
ImportError: No module named appengine.api
I can import google, but I cannot import google.appengine o r anything inside it.
my app.yaml
runtime: python
env: flex
entrypoint: gunicorn -b :$PORT main:app
runtime_config:
python_version: 2
threadsafe: true
- url: /.*
script: main.app
secure: always
requirements.txt
Flask==0.12.1
gunicorn==19.7.1
numpy==1.12.1
scipy==0.19.0
Pillow==4.1.1
scikit_learn==0.17
python-dateutil
webapp2==3.0.0b1
google-cloud-storage==1.1.1
pycrypto==2.6
google-api-python-client==1.5.0
How do I solve this issue?
Edit:
Additional info:
Google Cloud SDK 158.0.0
app-engine-python 1.9.54
deploying from macOS
|
stackoverflow
|
{
"language": "en",
"length": 113,
"provenance": "stackexchange_0000F.jsonl.gz:865314",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44543796"
}
|
893dc082948074e22e1a550eb92d2a100c9f9728
|
Stackoverflow Stackexchange
Q: WEKA FP-growth association rules not finding rules For a report I have to find association rules of a data set of transactions. I downloaded this data set:
http://archive.ics.uci.edu/ml/datasets/online+retail
Then I deleted some columns, converted to nominal values and normalized and then
I got this: https://ufile.io/gz3do
So I thought I had a data set with transactions on which I could use FP-growth and Apriori but I'm not getting any rules.
It just tells me: No rules found!
Can someone please explain to me if and what I'm doing wrong?
A: one reason could be that your support and/or confidence value are too high. try low ones. e.g. a support and confidence level of 0.001%. another reason could be that your data set just doesn't contain any association rules. try another data set which certainly contains association rules from a set minimum support and confidence value.
|
Q: WEKA FP-growth association rules not finding rules For a report I have to find association rules of a data set of transactions. I downloaded this data set:
http://archive.ics.uci.edu/ml/datasets/online+retail
Then I deleted some columns, converted to nominal values and normalized and then
I got this: https://ufile.io/gz3do
So I thought I had a data set with transactions on which I could use FP-growth and Apriori but I'm not getting any rules.
It just tells me: No rules found!
Can someone please explain to me if and what I'm doing wrong?
A: one reason could be that your support and/or confidence value are too high. try low ones. e.g. a support and confidence level of 0.001%. another reason could be that your data set just doesn't contain any association rules. try another data set which certainly contains association rules from a set minimum support and confidence value.
|
stackoverflow
|
{
"language": "en",
"length": 145,
"provenance": "stackexchange_0000F.jsonl.gz:865317",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44543809"
}
|
2a03318cf7acf88f2cf0469d1770c0e068b1ba89
|
Stackoverflow Stackexchange
Q: How can I get operating system Time zone in Java? The TimeZone.getDefault() returns the System Time zone until it's changed.
Sample 1:
System.out.println(TimeZone.getDefault());
Result:
Europe/Kaliningrad
It is system time zone.
Sample 2:
TimeZone.setDefault(TimeZone.getTimeZone("Asia/Kolkata"));
System.out.println(TimeZone.getDefault());
Result:
Asia/Kolkata
It isn't system time zone, system time zone is still Europe/Kaliningrad.
So how can I get system time zone even after change default DateTimeZone.
A: You can check system property user.timezone:
System.getProperty("user.timezone")
|
Q: How can I get operating system Time zone in Java? The TimeZone.getDefault() returns the System Time zone until it's changed.
Sample 1:
System.out.println(TimeZone.getDefault());
Result:
Europe/Kaliningrad
It is system time zone.
Sample 2:
TimeZone.setDefault(TimeZone.getTimeZone("Asia/Kolkata"));
System.out.println(TimeZone.getDefault());
Result:
Asia/Kolkata
It isn't system time zone, system time zone is still Europe/Kaliningrad.
So how can I get system time zone even after change default DateTimeZone.
A: You can check system property user.timezone:
System.getProperty("user.timezone")
A: Store the value of TimeZone.getDefault() in a variable before following codes
TimeZone.setDefault(TimeZone.getTimeZone("Asia/Kolkata"));
System.out.println(TimeZone.getDefault());
and use the variable later.
|
stackoverflow
|
{
"language": "en",
"length": 88,
"provenance": "stackexchange_0000F.jsonl.gz:865323",
"question_score": "6",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44543829"
}
|
b75d5ca05e8af66719e7ef731598b2a5f9f300ef
|
Stackoverflow Stackexchange
Q: how to render DT::datatables in a pdf using rmarkdown? How can I display DT::datatable objects from a rmarkdown script onto a pdf document? My code so far is breaks down with the following error:
processing file: reportCopy.Rmd
output file: reportCopy.knit.md
Functions that produce HTML output found in document targeting latex output.
Please change the output type of this document to HTML.
Including always_allow_html: yes in the YAML header suppresses the error, but nothing appears on the pdf.
I would be grateful for any help. My code is currently:
---
title: "DT"
output: pdf_document
---
### Chart 1
```{r}
DT::datatable(head(mtcars))
```
( I don't know if it matters, but my datatables are in fact created in a shiny application. Ideally, I would have liked to have the prerendered tables simply dumped into the rmarkdown script... but I switched tactic and now try to render the tables directly in the rmarkdown code)
A: Since knitr v1.13, HTML widgets will be rendered automatically as screenshots taken via the webshot package.
You need to install the webshot package and PhantomJS:
install.packages("webshot")
webshot::install_phantomjs()
(see https://bookdown.org/yihui/bookdown/html-widgets.html)
|
Q: how to render DT::datatables in a pdf using rmarkdown? How can I display DT::datatable objects from a rmarkdown script onto a pdf document? My code so far is breaks down with the following error:
processing file: reportCopy.Rmd
output file: reportCopy.knit.md
Functions that produce HTML output found in document targeting latex output.
Please change the output type of this document to HTML.
Including always_allow_html: yes in the YAML header suppresses the error, but nothing appears on the pdf.
I would be grateful for any help. My code is currently:
---
title: "DT"
output: pdf_document
---
### Chart 1
```{r}
DT::datatable(head(mtcars))
```
( I don't know if it matters, but my datatables are in fact created in a shiny application. Ideally, I would have liked to have the prerendered tables simply dumped into the rmarkdown script... but I switched tactic and now try to render the tables directly in the rmarkdown code)
A: Since knitr v1.13, HTML widgets will be rendered automatically as screenshots taken via the webshot package.
You need to install the webshot package and PhantomJS:
install.packages("webshot")
webshot::install_phantomjs()
(see https://bookdown.org/yihui/bookdown/html-widgets.html)
A: You cannot usedatatable in pdf_document (datatable is interactive, pdf is static), only in html_document!
The only possibility for PDF is to use the kable or for example pandoc.table
--> if You really wanna get the look of datatable and as You said datatable is created in a shiny application, then You can have a look at the webshot package, which is going to create a screenshot of Your datatable from shiny app which You can use further in pdf as a image.
|
stackoverflow
|
{
"language": "en",
"length": 264,
"provenance": "stackexchange_0000F.jsonl.gz:865335",
"question_score": "6",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44543858"
}
|
2db68eb0afb4af6067e43d7d29139074ba7cade2
|
Stackoverflow Stackexchange
Q: Finding minimum value in a list recursively I'm trying to find the minimum value in a list recursively. Since I'm still new in adapting to the recursive method, I would like to seek some help regarding my line of code:
listA = [9,-2,6,1,80,9,-2]
def findMinimum(l):
if len(l) == 1:
return l
else:
minNumber = findMinimum(l-1)
min = listA[0]
for i in listA:
if listA[i]<listA[i+1]:
min = listA[i]
return min
findMinimum(listA)
I'll appreciate if anyone could help me out as I'm relatively new to recursion and my understanding is definitely up to standard.
A: The first part of your function is correct. But you should change the second part like this:
listA = [9,-2,6,1,80,9,-2]
def findMinimum(l):
if len(l) == 1:
return l[0]
else:
return min(l[0], findMinimum(l[1:]))
findMinimum(listA)
Remember, recursive functions comes to make our codes simpler and easier.
|
Q: Finding minimum value in a list recursively I'm trying to find the minimum value in a list recursively. Since I'm still new in adapting to the recursive method, I would like to seek some help regarding my line of code:
listA = [9,-2,6,1,80,9,-2]
def findMinimum(l):
if len(l) == 1:
return l
else:
minNumber = findMinimum(l-1)
min = listA[0]
for i in listA:
if listA[i]<listA[i+1]:
min = listA[i]
return min
findMinimum(listA)
I'll appreciate if anyone could help me out as I'm relatively new to recursion and my understanding is definitely up to standard.
A: The first part of your function is correct. But you should change the second part like this:
listA = [9,-2,6,1,80,9,-2]
def findMinimum(l):
if len(l) == 1:
return l[0]
else:
return min(l[0], findMinimum(l[1:]))
findMinimum(listA)
Remember, recursive functions comes to make our codes simpler and easier.
A: The structure of your code is about right, but it has some mistakes. First, you should not be using listA inside of your function; listA is passed as an argument from the outside, and from within the function you should only refer to l. In the non-recursive case (where len(l) == 1), you should return l[0] (the minimum of a list with one element is that one element). Then, it is correct to call findMinimum inside your function again (that's the recursive call, as you know); however, what you probably want is to call it with the all the list l except the first element, that is, l[1:]. Then, you should compare the result minNumber to the first element of l; the idea is that you pick the smallest of l[0] and the minimum in l[1:]. Then you return the one you have chosen.
Additionally, you may want to consider the case when you get an empty list and throw an error; if you don't, you may get into an infinite recursion!
So a possible solution could be something like this:
listA = [9,-2,6,1,80,9,-2]
def findMinimum(l):
if len(l) == 0:
raise ValueError('Cannot find the minimum of an empty list.')
elif len(l) == 1:
return l[0]
else:
minNumber = findMinimum(l[1:])
min = l[0]
if minNumber < min:
min = minNumber
return min
findMinimum(listA)
|
stackoverflow
|
{
"language": "en",
"length": 361,
"provenance": "stackexchange_0000F.jsonl.gz:865361",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44543950"
}
|
d365b5411c67e5882e6133e6c429000bf6abce22
|
Stackoverflow Stackexchange
Q: Kotlin repeatable @annotations don't work on jdk-8 I have declared a repeatable annotation @Parameter in kotlin as below:
@Repeatable
annotation class Parameter(val name: String);
but when I use it as below the compiler reports an Error:
Only annotations with SOURCE retention can be repeated on JVM version before 1.8
@Parameter("foo")
@Parameter("bar")
fun repeat() = 1;
I'm sure I'm working with jdk-8 in kotlin. and the option jvmTarget also is set to 1.8 for kotlin-1.1.2 gradle plugin.
Q: Why it doesn't works fine?
sourceCompatibility = 1.8
targetCompatibility = 1.8
compileKotlin {
kotlinOptions{
jvmTarget = "1.8"
}
}
A: If I'm not mistaken, Kotlin compiler currently targets the JDK 1.6 class file format. This means that, on Java, it can't write multiple annotations to the class file.
While conceptually Kotlin supports multiple annotations, until there's proper 1.8 targeting, it can't do so because of the output restrictions.
|
Q: Kotlin repeatable @annotations don't work on jdk-8 I have declared a repeatable annotation @Parameter in kotlin as below:
@Repeatable
annotation class Parameter(val name: String);
but when I use it as below the compiler reports an Error:
Only annotations with SOURCE retention can be repeated on JVM version before 1.8
@Parameter("foo")
@Parameter("bar")
fun repeat() = 1;
I'm sure I'm working with jdk-8 in kotlin. and the option jvmTarget also is set to 1.8 for kotlin-1.1.2 gradle plugin.
Q: Why it doesn't works fine?
sourceCompatibility = 1.8
targetCompatibility = 1.8
compileKotlin {
kotlinOptions{
jvmTarget = "1.8"
}
}
A: If I'm not mistaken, Kotlin compiler currently targets the JDK 1.6 class file format. This means that, on Java, it can't write multiple annotations to the class file.
While conceptually Kotlin supports multiple annotations, until there's proper 1.8 targeting, it can't do so because of the output restrictions.
A: It is currently not possible with Kotlin. There is a bug opened, please feel free to vote for it: https://youtrack.jetbrains.com/issue/KT-12794
A: Yes, you can. Just use the "long form" of the annotation.
Follow the end of this thread:
https://discuss.kotlinlang.org/t/issue-with-repeated-java-8-annotations/1667/11
A: Check the Kotlin compiler settings in IntelliJ. IntelliJ doesn't use gradle to build and run your project by default. It uses the IDEA Kotlin plugin.
|
stackoverflow
|
{
"language": "en",
"length": 212,
"provenance": "stackexchange_0000F.jsonl.gz:865371",
"question_score": "9",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44543978"
}
|
595a6d61dd9325094a0907540362a349ece2c88e
|
Stackoverflow Stackexchange
Q: Why explicitly declare classes special functions as "default" What is the difference between explicitly declaring classes special functions default.
class Myclass
{
public:
Myclass() = default;
virtual ~Myclass() = default;
Myclass(MyClass&&) = default;
Myclass& operator=(MyClass&&) = default;
Myclass(const MyClass&) = default;
Myclass& operator=(const MyClass&) = default;
};
MyClass{};
What is the difference between this 2 declarations?
Why explicitly specify the default behavioral functions as default??
A: Because under certain conditions the compiler might not add the constructors, destructor or operators even though you may want the compiler-generated defaults. Then by using the explicit default designator the compiler will do that anyway.
You can find out more in e.g. this class reference.
|
Q: Why explicitly declare classes special functions as "default" What is the difference between explicitly declaring classes special functions default.
class Myclass
{
public:
Myclass() = default;
virtual ~Myclass() = default;
Myclass(MyClass&&) = default;
Myclass& operator=(MyClass&&) = default;
Myclass(const MyClass&) = default;
Myclass& operator=(const MyClass&) = default;
};
MyClass{};
What is the difference between this 2 declarations?
Why explicitly specify the default behavioral functions as default??
A: Because under certain conditions the compiler might not add the constructors, destructor or operators even though you may want the compiler-generated defaults. Then by using the explicit default designator the compiler will do that anyway.
You can find out more in e.g. this class reference.
|
stackoverflow
|
{
"language": "en",
"length": 112,
"provenance": "stackexchange_0000F.jsonl.gz:865374",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44543986"
}
|
3afac626916669eaae8e2a2f4148de0532bc8cd6
|
Stackoverflow Stackexchange
Q: Disable indoor view of buildings in Google Street View Image API I'm using Google Street View Image API (not the Javascript API) to construct a URL which returns a street view image. The problem is, that sometimes it returns an image of the inside of the building, rather than the image of the building at the provided address. I know this can be disabled using the Javascript API, but can it be disabled via a URL parameter in the Image API?
A: You should add &source=outdoor as an url parameter
|
Q: Disable indoor view of buildings in Google Street View Image API I'm using Google Street View Image API (not the Javascript API) to construct a URL which returns a street view image. The problem is, that sometimes it returns an image of the inside of the building, rather than the image of the building at the provided address. I know this can be disabled using the Javascript API, but can it be disabled via a URL parameter in the Image API?
A: You should add &source=outdoor as an url parameter
A: You can make a call to the Geocoding API in order to get the lat/lang coordinates to an address.
https://maps.googleapis.com/maps/api/streetview?size=400x400&address=1600+Amphitheatre+Parkway,+Mountain+View,+CA&fov=90&heading=235&pitch=10&key=YOUR_API_KEY
Then the you use the results.geometry.location from the response to get the coordinates you need to create the Street View Image URL
The Geocoder will always give you a location outside at the entrance, other APIs like Directions API may give you the same indoor problem.
A: You can call the Google Street View Image Metadata API to get the copyright info of the image. If this is not equal to "© Google, Inc.", it's a real streetview image.
I know that it's not the best solution, but it works.
|
stackoverflow
|
{
"language": "en",
"length": 202,
"provenance": "stackexchange_0000F.jsonl.gz:865386",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44544029"
}
|
ec2ffcf4f8ed08c78c14de07c6241283b09fbf53
|
Stackoverflow Stackexchange
Q: Can unused javascript functions slow down page performance? Trying to choose the right theme: I have a main javascript file with an amount of 500kb. In this file are many functions, which are not being used on the current site.
Beside the additional load on page load:
Can these unused functions slow down the performance?
Can unused functions require RAM or CPU usage on visitors end, even if they are not used, for example because they're storing variables?
A: Yes, because these functions are still being downloaded by the browser and stored in memory of the page in the browser.
But mind you, they probably won't have a big effect, so purging the javascript may not lead to a noticable increase, unless your users are visiting the site with a really slow internet connection or something.
|
Q: Can unused javascript functions slow down page performance? Trying to choose the right theme: I have a main javascript file with an amount of 500kb. In this file are many functions, which are not being used on the current site.
Beside the additional load on page load:
Can these unused functions slow down the performance?
Can unused functions require RAM or CPU usage on visitors end, even if they are not used, for example because they're storing variables?
A: Yes, because these functions are still being downloaded by the browser and stored in memory of the page in the browser.
But mind you, they probably won't have a big effect, so purging the javascript may not lead to a noticable increase, unless your users are visiting the site with a really slow internet connection or something.
A:
Beside the additional load on page load: Can these unused functions slow down the performance?
Beside the additional load on page load? Only if the user is on an extremely memory-starved device. 500k of JavaScript code doesn't translate into much memory usage for the parsed result at all, the effect of it sitting in memory will, in all but the most unusual environments, be effectively zero.
But two points on the thing you were leaving out with that "beside" comment:
*
*Downloading the unnecessary code; could have a noticeable effect on a slower connection.
*Parsing (and possibly compiling) the unnecessary code; could have a very small effect on the apparent page load, on a device with a really slow processor or a browser with a really slow JavaScript engine.
But effectively, in the vast majority of environments, just having the extra functions around won't cause a noticeable effect at all. It's primarily downloading the unnecessary program text that will be noticeable.
|
stackoverflow
|
{
"language": "en",
"length": 299,
"provenance": "stackexchange_0000F.jsonl.gz:865402",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44544064"
}
|
771c3a50a00a4cb05d1d99e1a309dc4e17106e54
|
Stackoverflow Stackexchange
Q: Pycharm Python console not printing the output I have a function that I call from Pycharm python console, but no output is shown.
In[2]: def problem1_6():
...: for i in range(1, 101, 2):
...: print(i, end = ' ')
...:
In[3]: problem1_6()
In[4]:
On the other hand, like this, it prints but in the wrong order
In[7]: def problem1_6():
...: print('hello')
...:
...: for i in range(1, 101, 2):
...: print(i, end = ' ')
...:
In[8]: problem1_6()
1 3 5 7 9 11 13 15 17 19 21 23 25 27 29 31 33 35 37 39 41 43 45 47 49 51 53 55 57 59 61 63 65 67 69 71 73 75 77 79 81 83 85 87 89 91 93 95 97 99 hello
As a third option, as a suggestion of @DavidS,
In[18]: import sys
...:
...: def problem1_6():
...: for i in range(1, 101, 2):
...: sys.stdout.write(str(i) + ' ')
...:
In[19]: problem1_6()
In[20]:
It still doesn't print.
A: This will working:
def problem1_6():
for i in range(1, 101, 2):
sys.stdout.write(str(i) + ' ')
sys.stdout.flush()
or:
def problem1_6():
for i in range(1, 101, 2):
print(i, end=' ', flush=True)
|
Q: Pycharm Python console not printing the output I have a function that I call from Pycharm python console, but no output is shown.
In[2]: def problem1_6():
...: for i in range(1, 101, 2):
...: print(i, end = ' ')
...:
In[3]: problem1_6()
In[4]:
On the other hand, like this, it prints but in the wrong order
In[7]: def problem1_6():
...: print('hello')
...:
...: for i in range(1, 101, 2):
...: print(i, end = ' ')
...:
In[8]: problem1_6()
1 3 5 7 9 11 13 15 17 19 21 23 25 27 29 31 33 35 37 39 41 43 45 47 49 51 53 55 57 59 61 63 65 67 69 71 73 75 77 79 81 83 85 87 89 91 93 95 97 99 hello
As a third option, as a suggestion of @DavidS,
In[18]: import sys
...:
...: def problem1_6():
...: for i in range(1, 101, 2):
...: sys.stdout.write(str(i) + ' ')
...:
In[19]: problem1_6()
In[20]:
It still doesn't print.
A: This will working:
def problem1_6():
for i in range(1, 101, 2):
sys.stdout.write(str(i) + ' ')
sys.stdout.flush()
or:
def problem1_6():
for i in range(1, 101, 2):
print(i, end=' ', flush=True)
|
stackoverflow
|
{
"language": "en",
"length": 196,
"provenance": "stackexchange_0000F.jsonl.gz:865410",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44544093"
}
|
84259d83d7e2a740d9b250e45038d453106365da
|
Stackoverflow Stackexchange
Q: create a new class do not appear right click menu eclipse I have a really annoying problem in Eclipse java neon, when i do a right click on a package in order to create a new class or in any area, Eclipse don't show me proposition like class, package or even project but only a incomplete menu unusable
I have already launch eclipse with -clean but not help
An illustration of the problem, I also notice that Eclipse is in... debug mode ? (upper left corner)
A: *
*Select menu Window -> Perspective -> Customize Perspective....
*Select Menu Visibility tab.
*Select tree node File -> New.
*Toggle check box of menu items as you like.
|
Q: create a new class do not appear right click menu eclipse I have a really annoying problem in Eclipse java neon, when i do a right click on a package in order to create a new class or in any area, Eclipse don't show me proposition like class, package or even project but only a incomplete menu unusable
I have already launch eclipse with -clean but not help
An illustration of the problem, I also notice that Eclipse is in... debug mode ? (upper left corner)
A: *
*Select menu Window -> Perspective -> Customize Perspective....
*Select Menu Visibility tab.
*Select tree node File -> New.
*Toggle check box of menu items as you like.
A: You are in the Debug Perspective (see What is a Perspective?). You can switch back to the Java or JEE perspective (where most development activities are typically performed) by using the perspective switcher toolbar in the upper-right corner of the Eclipse window.
I suggest you learn about the use of Perspectives in Eclipse.
A: Try
File --> New --> (Then choose what you want)
Specify the Source folder and package.
A: When I don't find something in the menu there, I just click on Other... and search for whatever I want to create. Regardless of the Perspective.
Of course it could more practical to customize the Perspective to include the frequently used items as mentioned.
|
stackoverflow
|
{
"language": "en",
"length": 232,
"provenance": "stackexchange_0000F.jsonl.gz:865478",
"question_score": "18",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44544312"
}
|
669c3e40486a7b195f0d0f65502ddb6083e08d42
|
Stackoverflow Stackexchange
Q: I am not able to add an image in email body using python , I am able to add a picture as a attachment but i want a code to add image in mailbody Right now I am using below code:
import win32com.client as win32
outlook = win32.Dispatch('outlook.application')
mail = outlook.CreateItem(0)
mail.To = 'to address'
mail.Subject = 'Message subject'
mail.Body = 'Message body'
mail.HTMLBody = '<h2>HTML Message body</h2>'# this field is optional
**mail.Attachments.Add('C:\Users\MA299445\Downloads\screenshot.png')**
mail.Send()
I am able to attach a picture but I want to paste this picture in e-mail body.
Thanks in advance
A: You can use <img> HTML tag:
encoded_image = base64.b64encode(image_file.getvalue()).decode("utf-8")
html = '<img src="data:image/png;base64,%s"/>' % encoded_image
And you can put the tag inside your HTML content.
Don't forget to import required modules:
import base64
|
Q: I am not able to add an image in email body using python , I am able to add a picture as a attachment but i want a code to add image in mailbody Right now I am using below code:
import win32com.client as win32
outlook = win32.Dispatch('outlook.application')
mail = outlook.CreateItem(0)
mail.To = 'to address'
mail.Subject = 'Message subject'
mail.Body = 'Message body'
mail.HTMLBody = '<h2>HTML Message body</h2>'# this field is optional
**mail.Attachments.Add('C:\Users\MA299445\Downloads\screenshot.png')**
mail.Send()
I am able to attach a picture but I want to paste this picture in e-mail body.
Thanks in advance
A: You can use <img> HTML tag:
encoded_image = base64.b64encode(image_file.getvalue()).decode("utf-8")
html = '<img src="data:image/png;base64,%s"/>' % encoded_image
And you can put the tag inside your HTML content.
Don't forget to import required modules:
import base64
A: Create an attachment and set the PR_ATTACH_CONTENT_ID property (DASL name "http://schemas.microsoft.com/mapi/proptag/0x3712001F") using Attachment.PropertyAccessor.SetProperty.
Your HTML body (MailItem.HTMLBody property) would then need to reference that image attachment through the cid:
<img src="cid:xyz"/>
where xyz is the value of the PR_ATTACH_CONTENT_ID property.
Look at an existing message with OutlookSpy (I am its author) - click IMessage button, go to the GetAttachmentTable tab, double click on an attachment to see its properties.
attachment = mail.Attachments.Add("C:\Users\MA299445\Downloads\screenshot.png")
attachment.PropertyAccessor.SetProperty("http://schemas.microsoft.com/mapi/proptag/0x3712001F", "MyId1")
mail.HTMLBody = "<html><body>Test image <img src=""cid:MyId1""></body></html>"
|
stackoverflow
|
{
"language": "en",
"length": 210,
"provenance": "stackexchange_0000F.jsonl.gz:865500",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44544369"
}
|
1f2c5ebbd21627e06b1028137b33717aa4bedee8
|
Stackoverflow Stackexchange
Q: Redundant use of distinct in group by? I'm reviewing some SQL queries in SAS and I encountered the following query structure:
SELECT distinct A, B, Sum(C) FROM Table1 GROUP BY A, B;
I would like to know if it's strictly equivalent to:
SELECT A, B, Sum(C) FROM Table1 GROUP BY A, B;
Or if I'm missing a nuance, in the output or the way the computation is handled
A: The two queries are equivalent.
Generally,
SELECT DISTINCT a, b, c
FROM <something>
is equivalent to
SELECT a, b, c
FROM <something>
GROUP BY a, b, c
In your case, <something> happens to be a result of GROUP BY query, which has distinct columns A and B. This is enough to ensure that triples A, B, SUM(C) are going to be unique as well.
|
Q: Redundant use of distinct in group by? I'm reviewing some SQL queries in SAS and I encountered the following query structure:
SELECT distinct A, B, Sum(C) FROM Table1 GROUP BY A, B;
I would like to know if it's strictly equivalent to:
SELECT A, B, Sum(C) FROM Table1 GROUP BY A, B;
Or if I'm missing a nuance, in the output or the way the computation is handled
A: The two queries are equivalent.
Generally,
SELECT DISTINCT a, b, c
FROM <something>
is equivalent to
SELECT a, b, c
FROM <something>
GROUP BY a, b, c
In your case, <something> happens to be a result of GROUP BY query, which has distinct columns A and B. This is enough to ensure that triples A, B, SUM(C) are going to be unique as well.
|
stackoverflow
|
{
"language": "en",
"length": 134,
"provenance": "stackexchange_0000F.jsonl.gz:865555",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44544535"
}
|
e79cb2cefce9746ca07f9e823f1abbdce059e924
|
Stackoverflow Stackexchange
Q: 64 bit Random number value in hex gives 16 characters sometimes 15 characters why? I have scenario in the application where I need to generate an ID which should be Random 64 bit value in hex representation,
What I have done so far,
Random randomLong = new Random();
long m = randomLong.nextLong();
String uid = Long.toHexString(m);
The o/p could be like 43c45c243f90326a or 82cf8e3863102f3a etc.
But not every time it gives 16 character, but 15 characters instead I don't get why :(
What is the most efficient way to get Random 64 bit value in hex representation which contains 16 characters
A: Use String.format()
long value=123L;
String uid = String.format("%016x", value);
// 000000000000007b
A word of explanation:
Each hex digit represents 4-bits. A 64-bit long can be represented by 16 (64/4) hexadecimal characters. To include the leading zeros, you want 16 hex digits. So your format specifier is %016x. Basically, %x for hex modified by inserting 016 to left-pad with zeros to achieve a minimum width of 16 characters.
|
Q: 64 bit Random number value in hex gives 16 characters sometimes 15 characters why? I have scenario in the application where I need to generate an ID which should be Random 64 bit value in hex representation,
What I have done so far,
Random randomLong = new Random();
long m = randomLong.nextLong();
String uid = Long.toHexString(m);
The o/p could be like 43c45c243f90326a or 82cf8e3863102f3a etc.
But not every time it gives 16 character, but 15 characters instead I don't get why :(
What is the most efficient way to get Random 64 bit value in hex representation which contains 16 characters
A: Use String.format()
long value=123L;
String uid = String.format("%016x", value);
// 000000000000007b
A word of explanation:
Each hex digit represents 4-bits. A 64-bit long can be represented by 16 (64/4) hexadecimal characters. To include the leading zeros, you want 16 hex digits. So your format specifier is %016x. Basically, %x for hex modified by inserting 016 to left-pad with zeros to achieve a minimum width of 16 characters.
A: this is because the method toHExString is not adding leading zeros to the random number you are converting, if the number can be represented with only one char as hex, then one char is what you get
see this
String uid = Long.toHexString(-1L);
String uid2 = Long.toHexString(1L);
System.out.println(uid);
System.out.println(uid2);
the output is
ffffffffffffffff
1
-1 will require the 64 bits to be represented but 1 doesnt
edit:
if you always need 16 nibbles then format that string in order to add the missing zeros
String uid = String.format("%016x", Long.toHexString(1L));
A: String.format("%016x", uid);
This will add the leading 0's, or 'padding'
as if the hex number is smaller than filling all 16, it wont show 16 digits.
|
stackoverflow
|
{
"language": "en",
"length": 287,
"provenance": "stackexchange_0000F.jsonl.gz:865557",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44544541"
}
|
0ad29157c2a0306109a8aeb81d0b72795c2e8ab3
|
Stackoverflow Stackexchange
Q: react-native-video can't display HLS stream on Android I'm using Android device to push a video stream with RTMP to SRS. In SRS I use HLS to deliver the stream.Then I use react-native-video to fetch the stream,but it is in white,it can't display the stream.
SRS logs show react-native-video had fetched the stream.The react-native-video onError logs
{ error: { extra: -22, what: 1 } }
However, when I put a video in SRS in advance,react-native-video can display the stream well.
I'm sure the pushed stream encoded in the right video/audio codec.
|
Q: react-native-video can't display HLS stream on Android I'm using Android device to push a video stream with RTMP to SRS. In SRS I use HLS to deliver the stream.Then I use react-native-video to fetch the stream,but it is in white,it can't display the stream.
SRS logs show react-native-video had fetched the stream.The react-native-video onError logs
{ error: { extra: -22, what: 1 } }
However, when I put a video in SRS in advance,react-native-video can display the stream well.
I'm sure the pushed stream encoded in the right video/audio codec.
|
stackoverflow
|
{
"language": "en",
"length": 91,
"provenance": "stackexchange_0000F.jsonl.gz:865564",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44544566"
}
|
ba92a82a1d48791ff2d0a9e1e1a33850ee60dde1
|
Stackoverflow Stackexchange
Q: Remove row if any column contains a specific string I am trying to figure out the best approach in R to remove rows that contain a specific string, in my case 'no_data'.
I have data from an outside source that imputes na's with 'no_data'
an example is this:
time |speed |wheels
1:00 |30 |no_data
2:00 |no_data|18
no_data|no_data|no_data
3:00 |50 |18
I want to go through the data and remove each row containing this 'no_data' string in any column. I have had a lot of trouble figuring this out. I have tried an sapply, filter, grep and combinations of the three. I am by no means an r expert so it could just be me incorrectly using these. Any help would be appreciated.
A: You can read the data using na.strings = 'no_data' to set them as NA and then simply omit NAs (or take complete.cases), i.e. (Using @akrun's data set)
d1 <- read.table(text = 'time speed wheels
1 1:00 30 no_data
2 2:00 no_data 18
3 no_data no_data no_data
4 3:00 50 18', na.strings = 'no_data', h=TRUE)
d1[complete.cases(d1),]
# time speed wheels
#4 3:00 50 18
#OR
na.omit(d1)
# time speed wheels
#4 3:00 50 18
|
Q: Remove row if any column contains a specific string I am trying to figure out the best approach in R to remove rows that contain a specific string, in my case 'no_data'.
I have data from an outside source that imputes na's with 'no_data'
an example is this:
time |speed |wheels
1:00 |30 |no_data
2:00 |no_data|18
no_data|no_data|no_data
3:00 |50 |18
I want to go through the data and remove each row containing this 'no_data' string in any column. I have had a lot of trouble figuring this out. I have tried an sapply, filter, grep and combinations of the three. I am by no means an r expert so it could just be me incorrectly using these. Any help would be appreciated.
A: You can read the data using na.strings = 'no_data' to set them as NA and then simply omit NAs (or take complete.cases), i.e. (Using @akrun's data set)
d1 <- read.table(text = 'time speed wheels
1 1:00 30 no_data
2 2:00 no_data 18
3 no_data no_data no_data
4 3:00 50 18', na.strings = 'no_data', h=TRUE)
d1[complete.cases(d1),]
# time speed wheels
#4 3:00 50 18
#OR
na.omit(d1)
# time speed wheels
#4 3:00 50 18
A: edit update to the filter(if_all/if_any) syntax (dplyr vs. 1.0.10), formerly using across (now deprecated) and even before that filter_all or filter_any (superseded).
Here a dplyr option: (using Akrun's data)
library(dplyr)
df1 <- structure(list(time = c("1:00", "2:00", "no_data", "3:00"), speed = c("30", "no_data", "no_data", "50"), wheels = c("no_data", "18", "no_data", "18")), .Names = c("time", "speed", "wheels"), class = "data.frame", row.names = c(NA, -4L))
## with if_any
df1 %>% filter(if_any(everything(), ~ grepl("no_data", .)))
#> time speed wheels
#> 1 1:00 30 no_data
#> 2 2:00 no_data 18
#> 3 no_data no_data no_data
## or with if_all
df1 %>% filter(if_all(everything(), ~ !grepl("no_data", .)))
#> time speed wheels
#> 1 3:00 50 18
## to GET all rows that fulfil condition, use
df1 %>% filter(if_any(everything(), ~ grepl("no_data", .)))
#> time speed wheels
#> 1 1:00 30 no_data
#> 2 2:00 no_data 18
#> 3 no_data no_data no_data
A: akrun answer is quick, correct and simply as much is it can :)
however if you like to make your life more complex you can also do:
dat
time speed wheels
1 1:00 30 no_data
2 2:00 no_data 18
3 no_data no_data no_data
4 3:00 50 18
dat$new <- apply(dat[,1:3], 1, function(x) any(x %in% c("no_data")))
dat <- dat[!(dat$new==TRUE),]
dat$new <- NULL
dat
time speed wheels
4 3:00 50 18
A: We can use rowSums to create a logical vector and subset based on it
df1[rowSums(df1 == "no_data")==0, , drop = FALSE]
# time speed wheels
#4 3:00 50 18
data
df1 <- structure(list(time = c("1:00", "2:00", "no_data", "3:00"), speed = c("30",
"no_data", "no_data", "50"), wheels = c("no_data", "18", "no_data",
"18")), .Names = c("time", "speed", "wheels"), class = "data.frame",
row.names = c(NA, -4L))
|
stackoverflow
|
{
"language": "en",
"length": 474,
"provenance": "stackexchange_0000F.jsonl.gz:865571",
"question_score": "14",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44544596"
}
|
6cfba4f7e375dba3d69503142c866793cb961002
|
Stackoverflow Stackexchange
Q: Remove ToastNotification from ActionCenter I have a desktop application running in Windows 10 which creates ToastNotifications that are also being stored in the Action Center. I noticed, that when I reboot the computer the Notifications are still present in the Action Center so I wanted to remove them through my Application when they're not necessary anymore.
I wanted to use the ToastNotificationHistory Remove method for this.
My code looks like this:
public static void RemoveNotificationByTag(string toastTag)
{
ToastNotificationManager.History.Remove(toastTag, "TEST");
}
But this leads to this exception: System.Exception: 'Element not found. (Exception from HRESULT: 0x80070490)'
The notification I've been sending priorly has a Tag and a Group value.
I get the same exception when calling the RemoveGroup or GetHistory method. Basically it seems like I cannot call any method from the History class without getting the same exception
A: On Windows 10 it is necessary to provide the applicationId parameter to each of the methods. Also you must specify not only a toast tag, but its group as well.
Calling the method like this works:
ToastNotificationManager.History.Remove(toastTag, "TEST", appId);
|
Q: Remove ToastNotification from ActionCenter I have a desktop application running in Windows 10 which creates ToastNotifications that are also being stored in the Action Center. I noticed, that when I reboot the computer the Notifications are still present in the Action Center so I wanted to remove them through my Application when they're not necessary anymore.
I wanted to use the ToastNotificationHistory Remove method for this.
My code looks like this:
public static void RemoveNotificationByTag(string toastTag)
{
ToastNotificationManager.History.Remove(toastTag, "TEST");
}
But this leads to this exception: System.Exception: 'Element not found. (Exception from HRESULT: 0x80070490)'
The notification I've been sending priorly has a Tag and a Group value.
I get the same exception when calling the RemoveGroup or GetHistory method. Basically it seems like I cannot call any method from the History class without getting the same exception
A: On Windows 10 it is necessary to provide the applicationId parameter to each of the methods. Also you must specify not only a toast tag, but its group as well.
Calling the method like this works:
ToastNotificationManager.History.Remove(toastTag, "TEST", appId);
|
stackoverflow
|
{
"language": "en",
"length": 178,
"provenance": "stackexchange_0000F.jsonl.gz:865582",
"question_score": "5",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44544624"
}
|
483359f190682ad59141ad72e039414dc2e01e6b
|
Stackoverflow Stackexchange
Q: ionic using get previous page name I am using ionic 2.
I need get previous page name.
here is my code.
@ViewChild(Nav) nav:Nav
constructor() {
this.nav_app.viewDidEnter.subscribe(
view => console.log("Current opened view is : " + view.name);
)
}
still i am getting
Current opened view is : t
How can i get previous page name.
Kindly advice me,
Thanks
A: In ionic +2 you can simply use:
this.navCtrl.last().name
Here is a simple example to log the name
constructor(public navCtrl:NavController){
console.log("Previous Page is called = " + this.navCtrl.last().name);
}
|
Q: ionic using get previous page name I am using ionic 2.
I need get previous page name.
here is my code.
@ViewChild(Nav) nav:Nav
constructor() {
this.nav_app.viewDidEnter.subscribe(
view => console.log("Current opened view is : " + view.name);
)
}
still i am getting
Current opened view is : t
How can i get previous page name.
Kindly advice me,
Thanks
A: In ionic +2 you can simply use:
this.navCtrl.last().name
Here is a simple example to log the name
constructor(public navCtrl:NavController){
console.log("Previous Page is called = " + this.navCtrl.last().name);
}
A: You can try
import { Component, ViewChild } from '@angular/core';
import { NavController } from 'ionic-angular';
export class MyApp {
constructor(public navCtrl:NavController){
var val=this.navCtrl.last();
console.log("VAL");
console.log(val);
}
}
A: if you want a history/previous page name in ionic you can use this.
this.navCtrl.getPrevious().name;
or
this.nav.getPrevious().name;
|
stackoverflow
|
{
"language": "en",
"length": 135,
"provenance": "stackexchange_0000F.jsonl.gz:865586",
"question_score": "8",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44544646"
}
|
afe7dbc68f92030f383e756651806f1cbdeda84f
|
Stackoverflow Stackexchange
Q: SourceTree Error: 'git status' failed with code 128: error: inflate: data stream error (incorrect header check)
$ git fsck --full
error: inflate: data stream error (incorrect header check)
error: unable to unpack 06c147f2771e280dfb4758c9a83b94346993d172 header
error: inflate: data stream error (incorrect header check)
fatal: loose object 06c147f2771e280dfb4758c9a83b94346993d172 (stored in .git/objects/06/c147f2771e280dfb4758c9a83b94346993d172) is corrupt
also, try this, and remove the all corrupted objects. after removing objects i write this command,
$ git reset --hard
error: unable to read sha1 file of 3X/3X.Core/Resources/Resource.Designer.cs (d46f74436ae02ec61a659a8a487aee5747e2feda)
error: unable to read sha1 file of 3X/3X.Core/Resources/Resource.resx (63342162564404ccae4917489dc78ebb65075f8a)
error: unable to read sha1 file of 3X/3X.Web/Views/Job/ConfirmationAdvice.cshtml (ff39e42f5cf0e0703bd9dfe84a4b746ff91eea40)
error: unable to read sha1 file of 3X/3X.Web/Views/Job/Create.cshtml (3a97827faac6c62fd24f347dd0b0951c27c03751)
error: unable to read sha1 file of 3X/3X.Web/Views/Job/DataEntry.cshtml (89f381bafaeff53eeaf64a26d8c9608e9e86b6a1)
error: unable to read sha1 file of 3X/3X.Web/wwwroot/js/viewjs/Job/create.js (1b62c618c31add2ca28d107c1a49604492409ecf)
fatal: Could not reset index file to revision 'HEAD'.
and got above error
A: Try with removing your index file which is in .git folder
Windows System:
del .git\index
git reset
Linux System:
rm -f .git/index
git reset
And if you have deleted index file manually then you need to do
git reset
|
Q: SourceTree Error: 'git status' failed with code 128: error: inflate: data stream error (incorrect header check)
$ git fsck --full
error: inflate: data stream error (incorrect header check)
error: unable to unpack 06c147f2771e280dfb4758c9a83b94346993d172 header
error: inflate: data stream error (incorrect header check)
fatal: loose object 06c147f2771e280dfb4758c9a83b94346993d172 (stored in .git/objects/06/c147f2771e280dfb4758c9a83b94346993d172) is corrupt
also, try this, and remove the all corrupted objects. after removing objects i write this command,
$ git reset --hard
error: unable to read sha1 file of 3X/3X.Core/Resources/Resource.Designer.cs (d46f74436ae02ec61a659a8a487aee5747e2feda)
error: unable to read sha1 file of 3X/3X.Core/Resources/Resource.resx (63342162564404ccae4917489dc78ebb65075f8a)
error: unable to read sha1 file of 3X/3X.Web/Views/Job/ConfirmationAdvice.cshtml (ff39e42f5cf0e0703bd9dfe84a4b746ff91eea40)
error: unable to read sha1 file of 3X/3X.Web/Views/Job/Create.cshtml (3a97827faac6c62fd24f347dd0b0951c27c03751)
error: unable to read sha1 file of 3X/3X.Web/Views/Job/DataEntry.cshtml (89f381bafaeff53eeaf64a26d8c9608e9e86b6a1)
error: unable to read sha1 file of 3X/3X.Web/wwwroot/js/viewjs/Job/create.js (1b62c618c31add2ca28d107c1a49604492409ecf)
fatal: Could not reset index file to revision 'HEAD'.
and got above error
A: Try with removing your index file which is in .git folder
Windows System:
del .git\index
git reset
Linux System:
rm -f .git/index
git reset
And if you have deleted index file manually then you need to do
git reset
A: It should be corrupted. Try removing this .git/objects/06/c147f2771e280dfb4758c9a83b94346993d172
If you are getting same error with another object try removing all those and fetch again.
A: Here is the solution that worked for me
i opened my server and went to .git/ and downloaded index file from there then deleted the local .git/index file manually and replace the file with the downloaded index file
A: Tried all answers and none worked with me.
Finally figured out that the problem is that you have probably configured your Source Tree repo to be located at repo_local_url/.git
The solution is to make it point at repo_local_url instead
A: I got this issue because I accidentally removed the git/objects folder. When I restored it from the Recycle Bin it worked again.
|
stackoverflow
|
{
"language": "en",
"length": 306,
"provenance": "stackexchange_0000F.jsonl.gz:865598",
"question_score": "6",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44544688"
}
|
b9ecbd57f5aceed58f76e0e126b3553b7cc6587b
|
Stackoverflow Stackexchange
Q: Python PPTX workaround function for rotating chart data labels I intend to create the following chart using Python PPTX.
Below code achieve the color setting, font size and number format. However, I am not yet able to rotate the data label, as I believe this API is not yet available in python-pptx 0.6.5
lbl = plot.data_labels
lbl.font.size = config["DATA_LABEL_FONT_SIZE"]
lbl.font.color.rgb = config["DATA_LABEL_FONT_COLOR"]
lbl.number_format = config["DATA_LABEL_NUMBER_FORMAT"]
lbl.position = config["DATA_LABEL_POSITION"]
To get started, I have created two minimal slides before and after rotating, and use opc-diag tool to find the diff.
<a:bodyPr rot="-5400000" spcFirstLastPara="1" vertOverflow="ellipsis"
vert="horz" wrap="square" lIns="38100" tIns="19050" rIns="38100"
bIns="19050" anchor="ctr" anchorCtr="1">\n
<a:spAutoFit/>\n </a:bodyPr>\n
I believe I need to add rot="-5400000" XML element to lbl (plot.data_labels), but not clear on how to achieve this. I have used dir(), ._element and .xml on the chart and its children but not able to find <a:bodyPr> tag.
A: I tried below and it works.
if config["DATA_LABEL_VERTICAL"]:
txPr = lbl._element.get_or_add_txPr()
txPr.bodyPr.set('rot','-5400000')
|
Q: Python PPTX workaround function for rotating chart data labels I intend to create the following chart using Python PPTX.
Below code achieve the color setting, font size and number format. However, I am not yet able to rotate the data label, as I believe this API is not yet available in python-pptx 0.6.5
lbl = plot.data_labels
lbl.font.size = config["DATA_LABEL_FONT_SIZE"]
lbl.font.color.rgb = config["DATA_LABEL_FONT_COLOR"]
lbl.number_format = config["DATA_LABEL_NUMBER_FORMAT"]
lbl.position = config["DATA_LABEL_POSITION"]
To get started, I have created two minimal slides before and after rotating, and use opc-diag tool to find the diff.
<a:bodyPr rot="-5400000" spcFirstLastPara="1" vertOverflow="ellipsis"
vert="horz" wrap="square" lIns="38100" tIns="19050" rIns="38100"
bIns="19050" anchor="ctr" anchorCtr="1">\n
<a:spAutoFit/>\n </a:bodyPr>\n
I believe I need to add rot="-5400000" XML element to lbl (plot.data_labels), but not clear on how to achieve this. I have used dir(), ._element and .xml on the chart and its children but not able to find <a:bodyPr> tag.
A: I tried below and it works.
if config["DATA_LABEL_VERTICAL"]:
txPr = lbl._element.get_or_add_txPr()
txPr.bodyPr.set('rot','-5400000')
|
stackoverflow
|
{
"language": "en",
"length": 157,
"provenance": "stackexchange_0000F.jsonl.gz:865602",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44544701"
}
|
ae745bf6069e7b78147c81eb5c5ea15e66c89427
|
Stackoverflow Stackexchange
Q: Inherit from collections.Counter: 'fromkeys' is abstract I have a python class that inherit from collections.Counter:
class Analyzer(collections.Counter):
pass
When I use pylint on this code, its answer is:
W: Method 'fromkeys' is abstract in class 'Counter' but is not overridden (abstract-method)
I checked the implementation of collections.Counter on my machine, and effectively, this method is not implemented (and a comment helps to understand why):
class Counter(dict):
...
@classmethod
def fromkeys(cls, iterable, v=None):
# There is no equivalent method for counters because setting v=1
# means that no element can have a count greater than one.
raise NotImplementedError(
'Counter.fromkeys() is undefined. Use Counter(iterable) instead.')
However, I don't really know how to implement this method, if Counter itself does not…
What is the way to solve this warning in this situation?
A: This question should answer some of the questions here. Basically, pylint checks for NotImplementedError exceptions being raised to determine whether a method is abstract (a false positive in this case). Adding the comment #pylint: disable=W0223 will disable this check.
A similar issue is also raised in this question.
|
Q: Inherit from collections.Counter: 'fromkeys' is abstract I have a python class that inherit from collections.Counter:
class Analyzer(collections.Counter):
pass
When I use pylint on this code, its answer is:
W: Method 'fromkeys' is abstract in class 'Counter' but is not overridden (abstract-method)
I checked the implementation of collections.Counter on my machine, and effectively, this method is not implemented (and a comment helps to understand why):
class Counter(dict):
...
@classmethod
def fromkeys(cls, iterable, v=None):
# There is no equivalent method for counters because setting v=1
# means that no element can have a count greater than one.
raise NotImplementedError(
'Counter.fromkeys() is undefined. Use Counter(iterable) instead.')
However, I don't really know how to implement this method, if Counter itself does not…
What is the way to solve this warning in this situation?
A: This question should answer some of the questions here. Basically, pylint checks for NotImplementedError exceptions being raised to determine whether a method is abstract (a false positive in this case). Adding the comment #pylint: disable=W0223 will disable this check.
A similar issue is also raised in this question.
A: There is two separate way of thinking.
*
*Consider Counter as abstract (as pylint does, as Jared explained). Then, the class Analyzer must implement fromkeys or also be abstract. But then, one should not be able to instanciate Counter.
*Consider Counter as concrete, even if you cannot use its fromkeys method. Then, pylint's warning must be disabled (as it is wrong in this case, see Jared's answer to kown how), and the class Analyzer is also concrete and does not need to implement this method.
|
stackoverflow
|
{
"language": "en",
"length": 265,
"provenance": "stackexchange_0000F.jsonl.gz:865632",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44544806"
}
|
227564b88fcb334acf523ec668a1c39db6a66ef7
|
Stackoverflow Stackexchange
Q: Deprecated FacebookSdk method throws RuntimeException I have FacebookSdk.sdkInitialize(getApplicationContext()) where sdkInitialize() is displayed as deprecated. According to this article we can just delete that line. But then I get following error for the line after AppEventsLogger.activateApp(this) :
AndroidRuntime: FATAL EXCEPTION: main Process: com.daimler.moovel.android:auth, PID: 4011 java.lang.RuntimeException: Unable to create application com.daimler.moovel.android.DebugApplication: The Facebook sdk must be initialized before calling activateApp at android.app.ActivityThread.handleBindApplication(ActivityThread.java:5879) at android.app.ActivityThread.-wrap3(ActivityThread.java)
at android.app.ActivityThread$H.handleMessage(ActivityThread.java:1699)
at android.os.Handler.dispatchMessage(Handler.java:102)
at android.os.Looper.loop(Looper.java:154)
at android.app.ActivityThread.main(ActivityThread.java:6682)
at java.lang.reflect.Method.invoke(Native Method)
at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:1520)
at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:1410)
Caused by: The Facebook sdk must be initialized before calling activateApp
at com.facebook.appevents.AppEventsLogger.activateApp(AppEventsLogger.java:226)
at com.facebook.appevents.AppEventsLogger.activateApp(AppEventsLogger.java:208)
So what am I missing?
A: No need of AppEventsLogger.activateApp(this); now it is not required if you have set up facebook_id in manifest.xml
u just have to add following in Application tag in manifest.xml
<meta-data
android:name="com.facebook.sdk.ApplicationId"
android:value="@string/facebook_app_id" />
where facebook_app_id is defined in string.xml
|
Q: Deprecated FacebookSdk method throws RuntimeException I have FacebookSdk.sdkInitialize(getApplicationContext()) where sdkInitialize() is displayed as deprecated. According to this article we can just delete that line. But then I get following error for the line after AppEventsLogger.activateApp(this) :
AndroidRuntime: FATAL EXCEPTION: main Process: com.daimler.moovel.android:auth, PID: 4011 java.lang.RuntimeException: Unable to create application com.daimler.moovel.android.DebugApplication: The Facebook sdk must be initialized before calling activateApp at android.app.ActivityThread.handleBindApplication(ActivityThread.java:5879) at android.app.ActivityThread.-wrap3(ActivityThread.java)
at android.app.ActivityThread$H.handleMessage(ActivityThread.java:1699)
at android.os.Handler.dispatchMessage(Handler.java:102)
at android.os.Looper.loop(Looper.java:154)
at android.app.ActivityThread.main(ActivityThread.java:6682)
at java.lang.reflect.Method.invoke(Native Method)
at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:1520)
at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:1410)
Caused by: The Facebook sdk must be initialized before calling activateApp
at com.facebook.appevents.AppEventsLogger.activateApp(AppEventsLogger.java:226)
at com.facebook.appevents.AppEventsLogger.activateApp(AppEventsLogger.java:208)
So what am I missing?
A: No need of AppEventsLogger.activateApp(this); now it is not required if you have set up facebook_id in manifest.xml
u just have to add following in Application tag in manifest.xml
<meta-data
android:name="com.facebook.sdk.ApplicationId"
android:value="@string/facebook_app_id" />
where facebook_app_id is defined in string.xml
A: That is because you upgraded your Facebook SDK and you are trying to use implemention of AppEventsLogger providinig this as Context:
AppEventsLogger.activateApp(this);
and that is replaced from SDK 4.19 and above with:
AppEventsLogger.activateApp(getApplication());
Documentation about this says:
Notifies the events system that the app has launched and activate and deactivate events should start being logged automatically. This should be called from the OnCreate method of you application.
That have logic if Facebook SDK now is auto initialized on Application start.
Try that I hope this will solve your problem.
|
stackoverflow
|
{
"language": "en",
"length": 228,
"provenance": "stackexchange_0000F.jsonl.gz:865652",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44544866"
}
|
3d67bf0ed42cfd0b45b56185dea3cc1475f5abc0
|
Stackoverflow Stackexchange
Q: Read all lines from FileChannel to Stream of strings For my specific task, I need to read the data from FileChannel to a Stream (or Collection) of String's.
In a regular NIO for a Path we can use a convenient method Files.lines(...) which returns a Stream<String>. I need to get a same result, but from a FileChannel instead of Path:
public static Stream<String> lines(final FileChannel channel) {
//...
}
Any ideas how to do that?
A: I assume you want the channel to be closed when the returned Stream is closed, so the simplest approach would be
public static Stream<String> lines(FileChannel channel) {
BufferedReader br = new BufferedReader(Channels.newReader(channel, "UTF-8"));
return br.lines().onClose(() -> {
try { br.close(); }
catch (IOException ex) { throw new UncheckedIOException(ex); }
});
}
It doesn’t actually require a FileChannel as input, a ReadableByteChannel is sufficient.
Note that this also belongs to “regular NIO”; java.nio.file is sometimes referred to as “NIO.2”.
|
Q: Read all lines from FileChannel to Stream of strings For my specific task, I need to read the data from FileChannel to a Stream (or Collection) of String's.
In a regular NIO for a Path we can use a convenient method Files.lines(...) which returns a Stream<String>. I need to get a same result, but from a FileChannel instead of Path:
public static Stream<String> lines(final FileChannel channel) {
//...
}
Any ideas how to do that?
A: I assume you want the channel to be closed when the returned Stream is closed, so the simplest approach would be
public static Stream<String> lines(FileChannel channel) {
BufferedReader br = new BufferedReader(Channels.newReader(channel, "UTF-8"));
return br.lines().onClose(() -> {
try { br.close(); }
catch (IOException ex) { throw new UncheckedIOException(ex); }
});
}
It doesn’t actually require a FileChannel as input, a ReadableByteChannel is sufficient.
Note that this also belongs to “regular NIO”; java.nio.file is sometimes referred to as “NIO.2”.
|
stackoverflow
|
{
"language": "en",
"length": 155,
"provenance": "stackexchange_0000F.jsonl.gz:865679",
"question_score": "11",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44544945"
}
|
ec135cb93997928eaba6c5c056418d2b47d113d2
|
Stackoverflow Stackexchange
Q: How to access a method in controller in laravel scheduler? I need to access a method of my UserController in laravel scheduler function,
protected function schedule(Schedule $schedule)
{
$schedule->command('foo')
->hourly();
}
Is it possible?
A: try his:
$schedule->call('Full\Namespace\YourController@method')
->hourly();
|
Q: How to access a method in controller in laravel scheduler? I need to access a method of my UserController in laravel scheduler function,
protected function schedule(Schedule $schedule)
{
$schedule->command('foo')
->hourly();
}
Is it possible?
A: try his:
$schedule->call('Full\Namespace\YourController@method')
->hourly();
|
stackoverflow
|
{
"language": "en",
"length": 40,
"provenance": "stackexchange_0000F.jsonl.gz:865689",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44544979"
}
|
63062f2a8c830f257950e78794752dc400471e70
|
Stackoverflow Stackexchange
Q: docker add "requires at least one argument" error I have a folder which contains all the necessary components for an app, which I want to make a container of. I have everything set up so far, with the directory /home/user/Documents/App in the Dockerfile under the ADD heading. Then when Idocker build . in the App directory I get this
ADD /home/user/Documents/App
ADD requires at least one argument
I realize that this is probably a simple fix but I am new to this so any help would be greatly appreciated. Thank you
FROM alpine
ADD </home/user/Documents/App> </home/user/Documents/DockerApp>
WORKDIR /code
RUN pip install -r requirements.txt
EXPPOSE 8080
CMD ["python", "app.py"]
A: You need a source and destination for the ADD command. The source here is the app folder path. The destination should be where you want the dockerfile is run.
Try this I think it might work
|
Q: docker add "requires at least one argument" error I have a folder which contains all the necessary components for an app, which I want to make a container of. I have everything set up so far, with the directory /home/user/Documents/App in the Dockerfile under the ADD heading. Then when Idocker build . in the App directory I get this
ADD /home/user/Documents/App
ADD requires at least one argument
I realize that this is probably a simple fix but I am new to this so any help would be greatly appreciated. Thank you
FROM alpine
ADD </home/user/Documents/App> </home/user/Documents/DockerApp>
WORKDIR /code
RUN pip install -r requirements.txt
EXPPOSE 8080
CMD ["python", "app.py"]
A: You need a source and destination for the ADD command. The source here is the app folder path. The destination should be where you want the dockerfile is run.
Try this I think it might work
A: ADD defined in Dockerfile has the following structure
ADD sourceJarName destinationJarName
e.g.
ADD target/spring-boot-rest-docker-example-0.0.1-SNAPSHOT.jar app.jar
change your ADD likewise and try it will work
|
stackoverflow
|
{
"language": "en",
"length": 171,
"provenance": "stackexchange_0000F.jsonl.gz:865699",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44545007"
}
|
c2933d4ac188fc50cdc62fd17d2b6e383714a4d9
|
Stackoverflow Stackexchange
Q: Basic FlatList code throws Warning - React Native FlatList does not seem to be working. I get this warning.
VirtualizedList: missing keys for items, make sure to specify a key property on each item or provide a custom keyExtractor.
Code:
<FlatList
data={[{name: 'a'}, {name: 'b'}]}
renderItem={
(item) => <Text key={Math.random().toString()}>{item.name}</Text>
}
key={Math.random().toString()} />
A: Have an 'id' in your data
const data = [
{
name: 'a',
id: 1
},
{
name: 'b',
id: 2
}];
<FlatList
data={data}
renderItem={
(item) => <Text>{item.name}</Text>
}
keyExtractor={item => item.id}
/>
Hope this helps !!!
|
Q: Basic FlatList code throws Warning - React Native FlatList does not seem to be working. I get this warning.
VirtualizedList: missing keys for items, make sure to specify a key property on each item or provide a custom keyExtractor.
Code:
<FlatList
data={[{name: 'a'}, {name: 'b'}]}
renderItem={
(item) => <Text key={Math.random().toString()}>{item.name}</Text>
}
key={Math.random().toString()} />
A: Have an 'id' in your data
const data = [
{
name: 'a',
id: 1
},
{
name: 'b',
id: 2
}];
<FlatList
data={data}
renderItem={
(item) => <Text>{item.name}</Text>
}
keyExtractor={item => item.id}
/>
Hope this helps !!!
A: You don't need to use keyExtractor. The React Native docs are a little unclear but the key property should go in each element of the data array rather than in the rendered child component. So rather than
<FlatList
data={[{id: 'a'}, {id: 'b'}]}
renderItem={({item}) => <View key={item.id} />}
/>
// React will give you a warning about there being no key prop
which is what you'd expect, you just need to put a key field in each element of the data array:
<FlatList
data={[{key: 'a'}, {key: 'b'}]}
renderItem={({item}) => <View />}
/>
// React is happy!
And definitely don't put a random string as the key.
A: Simply do this:
<FlatList
data={[{name: 'a'}, {name: 'b'}]}
renderItem={
({item}) => <Text>{item.name}</Text>
}
keyExtractor={(item, index) => index.toString()}
/>
Source: here
A: This did not give any warning (transforming the index to a string):
<FlatList
data={[{name: 'a'}, {name: 'b'}]}
keyExtractor={(item, index) => index+"" }
renderItem={
(item) => <Text>{item.name}</Text>
}
/>
A: A simple solution is to just give each entry a unique key before rendering with FlatList, since that's what the underlying VirtualizedList needs to track each entry.
users.forEach((user, i) => {
user.key = i + 1;
});
The warning does advice doing this first, or provide a custom key extractor.
A: this code work for me :
const content = [
{
name: 'Marta',
content: 'Payday in November: Rp. 987.654.321',
},]
<FlatList
data= {content}
renderItem = { ({ item }) => (
<View style={{ flexDirection: 'column', justifyContent: 'center' }}>
<Text style={{ fontSize: 20, fontWeight: '300', color: '#000000' }}>{item.name}</Text>
<Text style={{ color: '#000000' }}>{item.content}</Text>
/>
)}
keyExtractor={(item,index) => item.content}
/>
A: This worked for me:
<FlatList
data={[{name: 'a'}, {name: 'b'}]}
keyExtractor={(item, index) => index.toString()}
/>
A: You can use
<FlatList
data={[]}
keyExtractor={(item, index) => index.toString()}
/>
NOTE : Using index.toString() i.e expected to be string.
A: in case your Data is not an object :
[in fact it is using each item index (in the array) as a key]
data: ['name1','name2'] //declared in constructor
<FlatList
data= {this.state.data}
renderItem={({item}) => <Text>{item}</Text>}
ItemSeparatorComponent={this.renderSeparator}
keyExtractor={(item, index) => index.toString()}
/>
A: Tried Ray's answer but then got an warning that "the key must be an string". The following modified version works well to suppress the original and this string key warning if you don't have a good unique key on the item itself:
keyExtractor={(item, index) => item + index}
Of course if you do have an obvious and good unique key on the item itself you can just use that.
A: Make sure to write return statement otherwise it will return nothing..Happened with me.
A: This worked for me:
<FlatList
data={items}
data={[{name: 'a'}, {name: 'b'}]}
keyExtractor = () => {
return new Date().getTime().toString() + (Math.floor(Math.random() * Math.floor(new Date().getTime()))).toString(); };
/>
A: This worked for me:
<FlatList
data={items}
renderItem={({ title }) => <Text>{title}</Text> }}
keyExtractor={() => Math.random().toString(36).substr(2, 9)} />
Turning the keyExtractor into a string but instead of using index use a random generated number.
When I used keyExtractor={(item, index) => index.toString()}, It never worked and still kicked out a warning. But maybe this works for someone?
|
stackoverflow
|
{
"language": "en",
"length": 600,
"provenance": "stackexchange_0000F.jsonl.gz:865744",
"question_score": "161",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44545148"
}
|
17c66164260b4e5057ae5ae18c7be2a385293421
|
Stackoverflow Stackexchange
Q: How to add stylesheet dynamically in Angular 2? Is there a way to add stylesheet url or <style></style> dynamically in Angular2 ?
For example, if my variable is isModalOpened is true, I would like to add some CSS to few elements outside my root component. Like the body or html.
It's possible to do it with the DOM or jQuery but I would like to do this with Angular 2.
Possible ?
Thanks
A: You can create a <style> tag dynamically like this:
ngOnInit() {
const css = 'a {color: pink;}';
const head = document.getElementsByTagName('head')[0];
const style = document.createElement('style');
style.type = 'text/css';
style.appendChild(document.createTextNode(css));
head.appendChild(style);
}
|
Q: How to add stylesheet dynamically in Angular 2? Is there a way to add stylesheet url or <style></style> dynamically in Angular2 ?
For example, if my variable is isModalOpened is true, I would like to add some CSS to few elements outside my root component. Like the body or html.
It's possible to do it with the DOM or jQuery but I would like to do this with Angular 2.
Possible ?
Thanks
A: You can create a <style> tag dynamically like this:
ngOnInit() {
const css = 'a {color: pink;}';
const head = document.getElementsByTagName('head')[0];
const style = document.createElement('style');
style.type = 'text/css';
style.appendChild(document.createTextNode(css));
head.appendChild(style);
}
A: I am not sure you can do it to body or html, but you can do it to root component.
*
*Create a service injected to root component
*Let the service have a state ( may be BehaviorSubject )
*Access that service and change the state when isModalOpened is changed
*In root component , you will be watching this and change component parameter values
*Inside root component html , you can change class values based on the component param values
Update : Setting background color from an inner component .
app.component.css
.red{
background: red;
}
.white{
background: white;
}
.green{
background: green;
}
app.component.html
<div [ngClass]="backgroundColor" ></div>
app.component.ts
constructor(private statusService: StatusService) {
this.subscription = this.statusService.getColor()
.subscribe(color => { this.backgroundColor = color; });
}
status.service.ts
private color = new Subject<any>();
public setColor(newColor){
this.color.next(newColor);
}
public getColor(){
return this.color.asObservable();
}
child.component.ts
export class ChildComponent {
constructor(private statusService: StatusService) {}
setColor(color:string){
this.statusService.setColor(color);
}
}
So whenever we call setColor and pass a color variable such as 'red', 'green' or 'white' the background of root component changes accordingly.
A: Put all your html code in a custom directive - let's call it ngstyle...
Add your ngstyle to your page using the directive tags, in our case:
<ngstyle><ngstyle>
but let's also append the logic to your directive using ng-if so you can toggle it on or off...
<ngstyle ng-if="!isModalOpened"><ngstyle>
Now if your 'isModalOpened' is set to a scope in your controller like this:
$scope.isModalOpened = false; //or true, depends on what you need
...you can toggle it true or false many different ways which should toggle your directive on and off.
|
stackoverflow
|
{
"language": "en",
"length": 373,
"provenance": "stackexchange_0000F.jsonl.gz:865747",
"question_score": "5",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44545163"
}
|
69845f9607b766a2128f080e11af2a4a96973379
|
Stackoverflow Stackexchange
Q: Java - date, format, time zones and Spring Boot defaults I have simple Java object with a date field:
@JsonFormat(pattern="yyyy-MM-dd HH:mm:ss")
private Date date;
When I investigate the date with debugger I see:
Wed Jun 14 00:00:00 BST 2017
But once I return it with Spring boot controller I get:
"date": "2017-06-13 23:00:00"
*
*What's causing the difference?
*Why Java treats the date as BST?
*Does Java Date class contain time-zone information or just plain timestamp in long format?
*Is Spring boot using UTC format by default while serialising DTOs to JSON?
A: java.util.Date has no timezone information (only the long timestamp), but it uses the system's default timezone in the toString() method - you can find more info about this here (as already suggested in the comments).
Just check the value of TimeZone.getDefault(). It'll probably be Europe/London - as London in now in summer time, the short name (used by Date.toString()) of this timezone is BST.
As your output suggests, Spring is probably using UTC (as 2017-06-13 23:00:00 in UTC is 2017-06-14 00:00:00 in BST).
|
Q: Java - date, format, time zones and Spring Boot defaults I have simple Java object with a date field:
@JsonFormat(pattern="yyyy-MM-dd HH:mm:ss")
private Date date;
When I investigate the date with debugger I see:
Wed Jun 14 00:00:00 BST 2017
But once I return it with Spring boot controller I get:
"date": "2017-06-13 23:00:00"
*
*What's causing the difference?
*Why Java treats the date as BST?
*Does Java Date class contain time-zone information or just plain timestamp in long format?
*Is Spring boot using UTC format by default while serialising DTOs to JSON?
A: java.util.Date has no timezone information (only the long timestamp), but it uses the system's default timezone in the toString() method - you can find more info about this here (as already suggested in the comments).
Just check the value of TimeZone.getDefault(). It'll probably be Europe/London - as London in now in summer time, the short name (used by Date.toString()) of this timezone is BST.
As your output suggests, Spring is probably using UTC (as 2017-06-13 23:00:00 in UTC is 2017-06-14 00:00:00 in BST).
|
stackoverflow
|
{
"language": "en",
"length": 177,
"provenance": "stackexchange_0000F.jsonl.gz:865809",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44545359"
}
|
e06d47e87bf92f612f8e95f4c5005a3c9331f3e8
|
Stackoverflow Stackexchange
Q: Capturing and displaying output from within Julia I looked everywhere for this so I am putting it here for the weary traveler;
Question: How do I capture the full output of a variable to a file from within a julia script?
i.e. :
#script.jl
y = f(x)
y > out.txt
A: The answer is here:
https://github.com/JuliaLang/IJulia.jl/issues/455
If you want to display the output then:
show(STDOUT, "text/plain", x)
If you want to pipe the output to a file then:
x=rand(Float32, 32,32)
f = open("log.txt", "w")
write(f, string(x))
close(f)
And for larger x or prettier output
x = rand(Float32, 1028,1028);
f = open("log.txt", "w");
writedlm(f, x);
close(f);
|
Q: Capturing and displaying output from within Julia I looked everywhere for this so I am putting it here for the weary traveler;
Question: How do I capture the full output of a variable to a file from within a julia script?
i.e. :
#script.jl
y = f(x)
y > out.txt
A: The answer is here:
https://github.com/JuliaLang/IJulia.jl/issues/455
If you want to display the output then:
show(STDOUT, "text/plain", x)
If you want to pipe the output to a file then:
x=rand(Float32, 32,32)
f = open("log.txt", "w")
write(f, string(x))
close(f)
And for larger x or prettier output
x = rand(Float32, 1028,1028);
f = open("log.txt", "w");
writedlm(f, x);
close(f);
|
stackoverflow
|
{
"language": "en",
"length": 106,
"provenance": "stackexchange_0000F.jsonl.gz:865869",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44545564"
}
|
0e516b649f5ec0b19977e57d623ef4c2f14930c3
|
Stackoverflow Stackexchange
Q: Tabula-py - pages argument tabula.convert_into(filename_final, (filename_zero + '.csv'),
output_format="csv", pages="all")
How would I go about converting just pages 2 through the end? The "area" changes for the convert from page 1 through the rest of the pages.
I am using the Python wrapper tabula-py
Thanks in advance!
A: According to the README, the pages argument can be:
pages (str, int, list of int, optional)
- An optional values specifying pages to extract from.
- It allows str, int, list of int.
Example: 1, '1-2,3', 'all' or [1,2]. Default is 1
So I guess you can try something like '2-99999'.
|
Q: Tabula-py - pages argument tabula.convert_into(filename_final, (filename_zero + '.csv'),
output_format="csv", pages="all")
How would I go about converting just pages 2 through the end? The "area" changes for the convert from page 1 through the rest of the pages.
I am using the Python wrapper tabula-py
Thanks in advance!
A: According to the README, the pages argument can be:
pages (str, int, list of int, optional)
- An optional values specifying pages to extract from.
- It allows str, int, list of int.
Example: 1, '1-2,3', 'all' or [1,2]. Default is 1
So I guess you can try something like '2-99999'.
A: Tabula-py - pages argument
from tabula import convert_into
table_file = r"Table.pdf"
output_csv = r"Op.csv"
#area[] have predefined area for each page from page number 2
for i in range(2, str(len(table_file))):
j = i-2
convert_into(table_file, output_csv, output_format='csv', lattice=False, stream=True, area=area[j], pages=i)
|
stackoverflow
|
{
"language": "en",
"length": 141,
"provenance": "stackexchange_0000F.jsonl.gz:865902",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44545639"
}
|
22584ec6577ed04a3dc1dbfe1378c166dec70db2
|
Stackoverflow Stackexchange
Q: AVAudioEngine streaming audio from remote url Is it possible to use AVAudioEngine to stream audio from remote url? I see that I can create AVAudioPCMBuffer and then schedule it on AVAudioPlayerNode, but how can I fill AVAudioPCMBuffer with data from stream?
|
Q: AVAudioEngine streaming audio from remote url Is it possible to use AVAudioEngine to stream audio from remote url? I see that I can create AVAudioPCMBuffer and then schedule it on AVAudioPlayerNode, but how can I fill AVAudioPCMBuffer with data from stream?
|
stackoverflow
|
{
"language": "en",
"length": 42,
"provenance": "stackexchange_0000F.jsonl.gz:865930",
"question_score": "6",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44545736"
}
|
5c8a92ebbf88aead7671d0f85e32d073c5e91252
|
Stackoverflow Stackexchange
Q: In pandas, how to concatenate horizontally and then remove the redundant columns Say I have two dataframes.
DF1:
col1, col2, col3,
DF2: col2, col4, col5
How do I concatenate the two dataframes horizontally and have the col1, col2, col3, col4, and col5? Right now, I am doing pd.concat([DF1, DF2], axis = 1) but it ends up having two col2's. Assuming all the values inside the two col2 are the same, I want to have only one columns.
A: Dropping duplicates should work. Because drop_duplicates only works with index, we need to transpose the DF to drop duplicates and transpose it back.
pd.concat([DF1, DF2], axis = 1).T.drop_duplicates().T
|
Q: In pandas, how to concatenate horizontally and then remove the redundant columns Say I have two dataframes.
DF1:
col1, col2, col3,
DF2: col2, col4, col5
How do I concatenate the two dataframes horizontally and have the col1, col2, col3, col4, and col5? Right now, I am doing pd.concat([DF1, DF2], axis = 1) but it ends up having two col2's. Assuming all the values inside the two col2 are the same, I want to have only one columns.
A: Dropping duplicates should work. Because drop_duplicates only works with index, we need to transpose the DF to drop duplicates and transpose it back.
pd.concat([DF1, DF2], axis = 1).T.drop_duplicates().T
A: To avoid duplication of the columns while joining two data frames use the ignore_index argument.
pd.concat([df1, df2], ignore_index=True, sort=False)
But use it only if wish to append them and ignore the fact that they may have overlapping indexes
A: Use difference for columns from DF2 which are not in DF1 and simple select them by []:
DF1 = pd.DataFrame(columns=['col1', 'col2', 'col3'])
DF2 = pd.DataFrame(columns=['col2', 'col4', 'col5'])
DF2 = DF2[DF2.columns.difference(DF1.columns)]
print (DF2)
Empty DataFrame
Columns: [col4, col5]
Index: []
print (pd.concat([DF1, DF2], axis = 1))
Empty DataFrame
Columns: [col1, col2, col3, col4, col5]
Index: []
Timings:
np.random.seed(123)
N = 1000
DF1 = pd.DataFrame(np.random.rand(N,3), columns=['col1', 'col2', 'col3'])
DF2 = pd.DataFrame(np.random.rand(N,3), columns=['col2', 'col4', 'col5'])
DF2['col2'] = DF1['col2']
In [408]: %timeit (pd.concat([DF1, DF2], axis = 1).T.drop_duplicates().T)
10 loops, best of 3: 122 ms per loop
In [409]: %timeit (pd.concat([DF1, DF2[DF2.columns.difference(DF1.columns)]], axis = 1))
1000 loops, best of 3: 979 µs per loop
N = 10000:
In [411]: %timeit (pd.concat([DF1, DF2], axis = 1).T.drop_duplicates().T)
1 loop, best of 3: 1.4 s per loop
In [412]: %timeit (pd.concat([DF1, DF2[DF2.columns.difference(DF1.columns)]], axis = 1))
1000 loops, best of 3: 1.12 ms per loop
A: DF2.drop(DF2.columns[DF2.columns.isin(DF1.columns)],axis=1,inplace=True)
Then,
pd.concat([DF1, DF2], axis = 1)
|
stackoverflow
|
{
"language": "en",
"length": 301,
"provenance": "stackexchange_0000F.jsonl.gz:865986",
"question_score": "17",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44545921"
}
|
e56de59bdf42fb8885254c7f152bd43cf0ab99a4
|
Stackoverflow Stackexchange
Q: presentScene(... withTransition: SKTransition.fadeWithColor(customColor...) doesn't work with color with patternImage I want to present SCNScene with SKTransition.fadeWithColor. It works good with standart colors like UIColor.greenColor(), but id does not work with color made with patternImage. Transition in this case is just transparent. Here is the code:
guard let patternImage = UIImage(named: "pattern") else {return}
let patternColor = UIColor(patternImage: patternImage)
scnView.presentScene(scene, withTransition: SKTransition.fadeWithColor(patternColor, duration: 1), incomingPointOfView: nil) {
...
}
Same color is used to fill the background of view and it works like a charm, but not with transition.
So the questions are:
*
*Why color becomes transparent during transition?
*Is there a way to make it work with such a color?
*If no - what is the alternative solution for making fade-like or dissolve-like transition (scene1 -> Image -> scene2)?
Thanks in advance!
|
Q: presentScene(... withTransition: SKTransition.fadeWithColor(customColor...) doesn't work with color with patternImage I want to present SCNScene with SKTransition.fadeWithColor. It works good with standart colors like UIColor.greenColor(), but id does not work with color made with patternImage. Transition in this case is just transparent. Here is the code:
guard let patternImage = UIImage(named: "pattern") else {return}
let patternColor = UIColor(patternImage: patternImage)
scnView.presentScene(scene, withTransition: SKTransition.fadeWithColor(patternColor, duration: 1), incomingPointOfView: nil) {
...
}
Same color is used to fill the background of view and it works like a charm, but not with transition.
So the questions are:
*
*Why color becomes transparent during transition?
*Is there a way to make it work with such a color?
*If no - what is the alternative solution for making fade-like or dissolve-like transition (scene1 -> Image -> scene2)?
Thanks in advance!
|
stackoverflow
|
{
"language": "en",
"length": 134,
"provenance": "stackexchange_0000F.jsonl.gz:866037",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44546069"
}
|
7280044706008574232320fb78041cb8e226e22a
|
Stackoverflow Stackexchange
Q: ASP.NET Core memory consumption while uploading a file to Azure Blob When I upload a file to an Azure Blob, the memory consumption seems quite high. The data below is from a 200 MB file upload.
public async Task<IActionResult> PostFile(IFormFile file)
{
var filePath = Path.GetTempFileName();
using (var stream = new FileStream(filePath, FileMode.Create))
{
-> Memory used: 108 MB
await file.CopyToAsync(stream);
-> Memory used: 308 MB
await blockBlob.UploadFromStreamAsync(stream);
-> Memory used: 988 MB
}
}
Since the file is already loaded in the stream, I cannot understand the sharp increase in consumed memory caused by UploadFromStreamAsync(). I am using Microsoft.WindowsAzure.Storage 8.1.4 and .NET Core 1.1.
Am I doing anything wrong or is this expected behavior?
A: My test application is MVC application, so it just takes 243 memory usage.
Web api application isn't as same as MVC application memory usage.
My web api application usage is 1G as below image shows:
Am I doing anything wrong or is this expected behavior?
This is the expected behavior.
I suggest you could use take snapshot to see the heap size change information.
You could see the heap size is release but the memory usage is still 1G.
|
Q: ASP.NET Core memory consumption while uploading a file to Azure Blob When I upload a file to an Azure Blob, the memory consumption seems quite high. The data below is from a 200 MB file upload.
public async Task<IActionResult> PostFile(IFormFile file)
{
var filePath = Path.GetTempFileName();
using (var stream = new FileStream(filePath, FileMode.Create))
{
-> Memory used: 108 MB
await file.CopyToAsync(stream);
-> Memory used: 308 MB
await blockBlob.UploadFromStreamAsync(stream);
-> Memory used: 988 MB
}
}
Since the file is already loaded in the stream, I cannot understand the sharp increase in consumed memory caused by UploadFromStreamAsync(). I am using Microsoft.WindowsAzure.Storage 8.1.4 and .NET Core 1.1.
Am I doing anything wrong or is this expected behavior?
A: My test application is MVC application, so it just takes 243 memory usage.
Web api application isn't as same as MVC application memory usage.
My web api application usage is 1G as below image shows:
Am I doing anything wrong or is this expected behavior?
This is the expected behavior.
I suggest you could use take snapshot to see the heap size change information.
You could see the heap size is release but the memory usage is still 1G.
|
stackoverflow
|
{
"language": "en",
"length": 196,
"provenance": "stackexchange_0000F.jsonl.gz:866039",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44546073"
}
|
a6e8f10e8841c6f3dd7f4150158faf2712e19155
|
Stackoverflow Stackexchange
Q: Can a Blank Node have rdf:type property? Is it valid to form the following triple:
_:bn rdf:type foaf:name
where _:bn is a blank node?
I read the W3C standards for rdf:type. It says that the rdfs:domain of rdf:type is rdfs:resource. rdfs:resource is the the class of everything.
So is it correct to assign a rdf:type for a blank node?
A: Yes, it's absolutely fine. Blank nodes are simply things without a URL identifier. (Well it's a little more complex, but I wouldn't worry about it)
Like a car without a registration plate, it doesn't stop them doing anything cars with plates can do.
But it makes life difficult for people trying to work out whether they've seen the same car, or find the car.
|
Q: Can a Blank Node have rdf:type property? Is it valid to form the following triple:
_:bn rdf:type foaf:name
where _:bn is a blank node?
I read the W3C standards for rdf:type. It says that the rdfs:domain of rdf:type is rdfs:resource. rdfs:resource is the the class of everything.
So is it correct to assign a rdf:type for a blank node?
A: Yes, it's absolutely fine. Blank nodes are simply things without a URL identifier. (Well it's a little more complex, but I wouldn't worry about it)
Like a car without a registration plate, it doesn't stop them doing anything cars with plates can do.
But it makes life difficult for people trying to work out whether they've seen the same car, or find the car.
A: For a more generic answer, a blank node is a purely abstract "thing" being provided by a graph database.
Giving a type to a node means creating a triple, in other words providing a property to a subject.
subject= my blank node
property=rdf:type
object=foaf:name
A blank node might not have an identifier but starts being a concrete "thing" when it becomes the subject of properties.
The absence of an identifier is not definitely a problem for considering the node to be a resource as it might later be associated to the node via another triple. e.g. (using the appropriate Dublin Core property for an identifier) :
subject= my blank node
property=http://purl.org/dc/terms/subject
object=http://my-testdomain.com/my-url
|
stackoverflow
|
{
"language": "en",
"length": 238,
"provenance": "stackexchange_0000F.jsonl.gz:866049",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44546102"
}
|
5f392863ce35f10a721131da872551a8cc4c9099
|
Stackoverflow Stackexchange
Q: how to use substring in ef query within where and select statement Hi there how to use substring in EF query within where and select statements?
my query does not count any record from database to execute
but in database having 13359 records
My query is below
string subautoassettype, subautocat, subautocat1 ;
string autoidstring;
autocat = cmbcategory.Text;
autocat1 = cmbcategory1.Text;
autocat2 = cmbcategory2.Text;
subautoassettype = autocat.Substring(0, 3);
subautocat = autocat1.Substring(0, 3);
subautocat1 = autocat2.Substring(0, 3);
autoidstring = subautoassettype + subautocat + subautocat1 + "-";
var varmaxidcheck = cnx.item_master.Where(c => c.Item_ID.Substring(0, 9) == autoidstring)
.Select(gd => new { gd.Item_ID }).ToList();
if (varmaxidcheck.Count > 0)
{
foreach (var it11 in varmaxidcheck)
{
string kij = it11.Item_ID;
}
}
Sample Output expect :
cnx.item_master(where c=> c.FixBooChe-0013346 == FixBooChe-).Select(fg=>fg.Item_ID);
Pls help me to solve this issue
Also I have include SQL query
select max(substring(Item_ID,11,6)) from item_master where substring(Item_ID,1,9) = 'FixEleCre';
A: Use Contains instead of ==:
var varmaxidcheck = cnx.item_master.Where(c => c.Item_ID.Substring(0, 9)
.Contains(autoidstring))
.Select(gd => new { gd.Item_ID })
.ToList();
Because probably you don't have the exact match strings in your Where clause.
|
Q: how to use substring in ef query within where and select statement Hi there how to use substring in EF query within where and select statements?
my query does not count any record from database to execute
but in database having 13359 records
My query is below
string subautoassettype, subautocat, subautocat1 ;
string autoidstring;
autocat = cmbcategory.Text;
autocat1 = cmbcategory1.Text;
autocat2 = cmbcategory2.Text;
subautoassettype = autocat.Substring(0, 3);
subautocat = autocat1.Substring(0, 3);
subautocat1 = autocat2.Substring(0, 3);
autoidstring = subautoassettype + subautocat + subautocat1 + "-";
var varmaxidcheck = cnx.item_master.Where(c => c.Item_ID.Substring(0, 9) == autoidstring)
.Select(gd => new { gd.Item_ID }).ToList();
if (varmaxidcheck.Count > 0)
{
foreach (var it11 in varmaxidcheck)
{
string kij = it11.Item_ID;
}
}
Sample Output expect :
cnx.item_master(where c=> c.FixBooChe-0013346 == FixBooChe-).Select(fg=>fg.Item_ID);
Pls help me to solve this issue
Also I have include SQL query
select max(substring(Item_ID,11,6)) from item_master where substring(Item_ID,1,9) = 'FixEleCre';
A: Use Contains instead of ==:
var varmaxidcheck = cnx.item_master.Where(c => c.Item_ID.Substring(0, 9)
.Contains(autoidstring))
.Select(gd => new { gd.Item_ID })
.ToList();
Because probably you don't have the exact match strings in your Where clause.
|
stackoverflow
|
{
"language": "en",
"length": 181,
"provenance": "stackexchange_0000F.jsonl.gz:866051",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44546105"
}
|
653556b5f4fc92f4b807a802cf549dd51631bb20
|
Stackoverflow Stackexchange
Q: How to query LDAP group membership with curl? I would like to use curl on the command line to check if a $USER is a member of the LDAP group $GROUP.
This works:
curl --user $CREDS \
"ldaps://ldap.foo.com/DC=ads,DC=foo,DC=com??sub?(sAMAccountName=$USER)" \
| grep -a "memberOf: CN=$GROUP,OU=Distribution,OU=Groups,DC=ads,DC=foo,DC=com"
Unfortunately, that call takes quite some time and it returns a lot of info that I am not interested in. Do you know if a more efficient way exists?
A: You could try :
curl --user $CREDS \
"ldaps://ldap.foo.com/DC=ads,DC=foo,DC=com?memberOf?sub?(&(sAMAccountName=$USER)(memberOf=CN=$GROUP,OU=Distribution,OU=Groups,DC=ads,DC=foo,DC=com))"
Which will
*
*For the filter : retrieve only users who have sAMAccountName=$USER AND memberOf=CN=$GROUP,OU=Distribution,OU=Groups,DC=ads,DC=foo,DC=com (it will make the filtering server side than with your grep command on all the users attributes)
*For the memberOf addition (before the ?sub) specify that you want only the memberOf attribute to be retrieved.
If the filter change did the trick, try to just retrieve the dn for example to limit the ouput, because if no attribute is specified, every attributes are returned
For more information : https://docs.oracle.com/cd/E19396-01/817-7616/ldurl.html
|
Q: How to query LDAP group membership with curl? I would like to use curl on the command line to check if a $USER is a member of the LDAP group $GROUP.
This works:
curl --user $CREDS \
"ldaps://ldap.foo.com/DC=ads,DC=foo,DC=com??sub?(sAMAccountName=$USER)" \
| grep -a "memberOf: CN=$GROUP,OU=Distribution,OU=Groups,DC=ads,DC=foo,DC=com"
Unfortunately, that call takes quite some time and it returns a lot of info that I am not interested in. Do you know if a more efficient way exists?
A: You could try :
curl --user $CREDS \
"ldaps://ldap.foo.com/DC=ads,DC=foo,DC=com?memberOf?sub?(&(sAMAccountName=$USER)(memberOf=CN=$GROUP,OU=Distribution,OU=Groups,DC=ads,DC=foo,DC=com))"
Which will
*
*For the filter : retrieve only users who have sAMAccountName=$USER AND memberOf=CN=$GROUP,OU=Distribution,OU=Groups,DC=ads,DC=foo,DC=com (it will make the filtering server side than with your grep command on all the users attributes)
*For the memberOf addition (before the ?sub) specify that you want only the memberOf attribute to be retrieved.
If the filter change did the trick, try to just retrieve the dn for example to limit the ouput, because if no attribute is specified, every attributes are returned
For more information : https://docs.oracle.com/cd/E19396-01/817-7616/ldurl.html
|
stackoverflow
|
{
"language": "en",
"length": 168,
"provenance": "stackexchange_0000F.jsonl.gz:866056",
"question_score": "8",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44546116"
}
|
c208348d9cd7fe81487ac615b00f515a1c26ebe2
|
Stackoverflow Stackexchange
Q: What is the relationship between OStatus, pump.io and ActivityPub? My understanding is that:
*
*OStatus is a decentralized social networking protocol made up of several other protocols (Atom feeds, Activity Streams, PubSubHubbub, Salmon, and WebFinger)
*
*GNU Social and Mastodon are two server software applications that implement OStatus
*pump.io API is an interface to the pump.io server software (Activity Streams, OAuth, Web Host Metadata)
*
*identi.ca is a pump.io instance (not accessible right now), GNU MediaGoblin is a server application that currently uses a pump-like API
*ActivityPub is a proposed decentralized social networking protocol
*
*GNU MediaGoblin is a server application that will likely implement ActivityPub
How do these protocols interoperate? Does ActivityPub completely replace OStatus, or only the Activity Streams component?
A: They are 3 different protocols that don't inter-operate. Though some software can communicate with 2 or more. Mastodon for example falls back to OStatus if ActivityPub does not work.
And so in that sense, to respond to your question, ActivityPub completely replaces OStatus.
|
Q: What is the relationship between OStatus, pump.io and ActivityPub? My understanding is that:
*
*OStatus is a decentralized social networking protocol made up of several other protocols (Atom feeds, Activity Streams, PubSubHubbub, Salmon, and WebFinger)
*
*GNU Social and Mastodon are two server software applications that implement OStatus
*pump.io API is an interface to the pump.io server software (Activity Streams, OAuth, Web Host Metadata)
*
*identi.ca is a pump.io instance (not accessible right now), GNU MediaGoblin is a server application that currently uses a pump-like API
*ActivityPub is a proposed decentralized social networking protocol
*
*GNU MediaGoblin is a server application that will likely implement ActivityPub
How do these protocols interoperate? Does ActivityPub completely replace OStatus, or only the Activity Streams component?
A: They are 3 different protocols that don't inter-operate. Though some software can communicate with 2 or more. Mastodon for example falls back to OStatus if ActivityPub does not work.
And so in that sense, to respond to your question, ActivityPub completely replaces OStatus.
A: OStatus is a decentralized social networking protocol which - as you say - is made up of several other protocols: Atom feeds, Activity Streams (version 1.0), PubSubHubbub, Salmon, and WebFinger.
*
*It is still used by Friendica and GNU Social (formerly StatusNet).
*It is no longer used by Mastodon. Support was removed in 2019 in favor of ActivityPub.
pump.io is an engine with an API that exposes Activity Streams (version 1.0). Pump.io was meant as a successor to StatusNet.
*
*Identi.ca switched from StatusNet to pump.io in 2013.
*Pump.io intends to deprecate their API and move to ActivityPub (see Developer docs).
Activity Streams is for the serialization of a stream of social activities using the JSON(-LD) format.
*
*Version 1.0 was created by a working group that had Google, Facebook and Microsoft backing. It uses JSON as serialization format.
*Version 2.0 was a sanitized version derived from 1.0 and uses JSON-LD as serialization format. It has become a W3C Recommendation that comes in two parts: Core and Vocabulary.
ActivityPub is a decentralized social networking protocol that is based upon Activity Streams 2.0 and it is the basis of the Fediverse. It is also a W3C Recommendation.
*
*The ActivityPub specification is intentionally incomplete and flexible in a number of places. In order to create full-blown fediverse apps it should be combined with:
*
*Webfinger (to find federated accounts)
*HTTP- and/or JSON-LD Signatures (for server-2-server communication)
*OAuth 2.0 (client credentials, authorization scopes).
*For a long and ever-growing list of ActivityPub applications see the Feneas ActivityPub Watchlist.
So in summary OStatus, pump.io API and ActivityPub are three separate incompatible means to create federated social applications (that have nonetheless some common denominators). Of these ActivityPub offers the best way forward, and is the protocol you should choose from this list going forward.
PS. The best places to ask questions as an ActivityPub implementer are the SocialHub and Feneas forums. And see also the Guide for new ActivityPub implementers at SocialHub.
|
stackoverflow
|
{
"language": "en",
"length": 495,
"provenance": "stackexchange_0000F.jsonl.gz:866110",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44546232"
}
|
feb44cf2422eae45defaa77ffd996432bbfd2ff1
|
Stackoverflow Stackexchange
Q: Jenkins run job in local docker image Hi I have a Jenkins where I want to run a job in a remote machine using a docker image that I already have on that machine since my cluster does not have any internet access.
I have seen Jenkins docker plugin, but it seems it´s using a docker repository to get the images.
My question is, which is the best Jenkins plugin to acomplish this?. Run a Jenkins job in a docker container using an image in the machine where the job is running?
Regards.
|
Q: Jenkins run job in local docker image Hi I have a Jenkins where I want to run a job in a remote machine using a docker image that I already have on that machine since my cluster does not have any internet access.
I have seen Jenkins docker plugin, but it seems it´s using a docker repository to get the images.
My question is, which is the best Jenkins plugin to acomplish this?. Run a Jenkins job in a docker container using an image in the machine where the job is running?
Regards.
|
stackoverflow
|
{
"language": "en",
"length": 94,
"provenance": "stackexchange_0000F.jsonl.gz:866149",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44546366"
}
|
d396c1c806ac8b86312b1ad9ce0e4ee4d6a953a9
|
Stackoverflow Stackexchange
Q: Angular 4 extend and implement I have a class Hero which needs to extend GoogleCharts class. I also need to implement OnInit to get the some data from params.
How can I do this?
A: Just like this:
export class Hero extends GoogleCharts implements OnInit {...
If GoogleCharts already implements OnInit you should call super.ngOnInit(); before doing other stuff in your ngOnInit method.
Like this:
interface OnInit {
ngOnInit: () => void;
}
class GoogleCharts implements OnInit{
ngOnInit() {
//does some stuff here
}
}
class Hero extends GoogleCharts implements OnInit{
ngOnInit() {
super.ngOnInit();
//do my stuff here
}
}
|
Q: Angular 4 extend and implement I have a class Hero which needs to extend GoogleCharts class. I also need to implement OnInit to get the some data from params.
How can I do this?
A: Just like this:
export class Hero extends GoogleCharts implements OnInit {...
If GoogleCharts already implements OnInit you should call super.ngOnInit(); before doing other stuff in your ngOnInit method.
Like this:
interface OnInit {
ngOnInit: () => void;
}
class GoogleCharts implements OnInit{
ngOnInit() {
//does some stuff here
}
}
class Hero extends GoogleCharts implements OnInit{
ngOnInit() {
super.ngOnInit();
//do my stuff here
}
}
|
stackoverflow
|
{
"language": "en",
"length": 101,
"provenance": "stackexchange_0000F.jsonl.gz:866184",
"question_score": "14",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44546471"
}
|
a7f8a364d3bdedbea6961c539da5275367fd7cda
|
Stackoverflow Stackexchange
Q: Cast IEnumerable to runtime type How can achieve something similar to listOfBaseItems.Cast<Child>() using a type defined in runtime? e.g.
var t = typeof(Child); // the type would be a method argument in my case
var desiredType = typeof(List<>).MakeGenericType(t);
var castedList = Convert.ChangeType(listOfBaseItems, desiredType);
I get an exception that the items doesn't implement IConvertible. What am I missing?
A: Assuming that the cast is legal (e.g. listOfBaseItems actually contains child items), then you can invoke Cast (which is a generic extension method in the Enumerable class) at runtime like this:
var result =
typeof(Enumerable)
.GetMethod("Cast")
.MakeGenericMethod(t)
.Invoke(null, new object[] {listOfBaseItems});
|
Q: Cast IEnumerable to runtime type How can achieve something similar to listOfBaseItems.Cast<Child>() using a type defined in runtime? e.g.
var t = typeof(Child); // the type would be a method argument in my case
var desiredType = typeof(List<>).MakeGenericType(t);
var castedList = Convert.ChangeType(listOfBaseItems, desiredType);
I get an exception that the items doesn't implement IConvertible. What am I missing?
A: Assuming that the cast is legal (e.g. listOfBaseItems actually contains child items), then you can invoke Cast (which is a generic extension method in the Enumerable class) at runtime like this:
var result =
typeof(Enumerable)
.GetMethod("Cast")
.MakeGenericMethod(t)
.Invoke(null, new object[] {listOfBaseItems});
|
stackoverflow
|
{
"language": "en",
"length": 100,
"provenance": "stackexchange_0000F.jsonl.gz:866194",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44546513"
}
|
26fac974aac4cbdf4d337d1298f6e9a4907cc631
|
Stackoverflow Stackexchange
Q: CSS image width 100%, but still has some background I have the following website:
www.thewhozoo.com
It works fine on a desk top, but for some reason on a mobile device's browser, it has a grey bar down the side.
I have the following code:
css
body {
background-color: #4B5961;
width: 100%;
margin: 0;
}
.top-container {
float: left;
width: 100%;
background: linear-gradient( rgba(0,0,0,0.1), rgba(0, 0, 0, 0.1) ),url('../images/background1.jpg') no-repeat center center fixed;
-webkit-background-size: cover;
-moz-background-size: cover;
-o-background-size: cover;
background-size: cover;
}
html
<body>
<div id="image-head" class="top-container">
The grey line down the side is the same as the background color of the body (#4B5961).
As you can see, I have the body width and the background image width both set at 100%. So I would not expect to see the grey line. I think it is a result of the scroll bar.
If anyone can advise how I can remove this, I would appreciate the help.
A: Check your style this css rule, take out the padding-left: 10px;:
.wz-title {
color: #B2D137;
font-weight: bold;
/* padding-left: 10px; */
font-size: 110%;
text-shadow: 0px 0px 10px rgba(0,0,0,0.7), 0px 0px 1px rgba(0,0,0,0.4);
}
|
Q: CSS image width 100%, but still has some background I have the following website:
www.thewhozoo.com
It works fine on a desk top, but for some reason on a mobile device's browser, it has a grey bar down the side.
I have the following code:
css
body {
background-color: #4B5961;
width: 100%;
margin: 0;
}
.top-container {
float: left;
width: 100%;
background: linear-gradient( rgba(0,0,0,0.1), rgba(0, 0, 0, 0.1) ),url('../images/background1.jpg') no-repeat center center fixed;
-webkit-background-size: cover;
-moz-background-size: cover;
-o-background-size: cover;
background-size: cover;
}
html
<body>
<div id="image-head" class="top-container">
The grey line down the side is the same as the background color of the body (#4B5961).
As you can see, I have the body width and the background image width both set at 100%. So I would not expect to see the grey line. I think it is a result of the scroll bar.
If anyone can advise how I can remove this, I would appreciate the help.
A: Check your style this css rule, take out the padding-left: 10px;:
.wz-title {
color: #B2D137;
font-weight: bold;
/* padding-left: 10px; */
font-size: 110%;
text-shadow: 0px 0px 10px rgba(0,0,0,0.7), 0px 0px 1px rgba(0,0,0,0.4);
}
A: Remove the padding with .wz-title. For some reason, removing padding fixes it.
.wz-title {
padding-left: 0;
}
Here:
A: Set overflow-x:hidden; on your body, that'll fix it:
body {
overflow-x:hidden;
}
A: Try this:
body, body * {
box-sizing: border-box;
}
By using above code you will never face any problem with padding ever. :)
|
stackoverflow
|
{
"language": "en",
"length": 246,
"provenance": "stackexchange_0000F.jsonl.gz:866212",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44546569"
}
|
25d1a61662a3206daf2f34418c7c14d25a874a59
|
Stackoverflow Stackexchange
Q: ASP.Net Web Api how to change token expiration time in runtime For our ASP.Net application, we have recently implemented Web API usage. We use bearer token authentication. Our customer has ordered the opportunity to change token expiration time on the settings page and store it in the database. There is no problem to get it from database and change time in Startup.Auth.cs
static Startup()
{
SettingsService _settingService = EngineContext.Current.Resolve<SettingsService>();
var hostSettings = _settingService.GetHostSettings();
OAuthServerOptions = new OAuthAuthorizationServerOptions()
{
AllowInsecureHttp = true,
TokenEndpointPath = new PathString("/token"),
AccessTokenExpireTimeSpan = TimeSpan.FromHours(hostSettings.ApiTokenExpirationTimeInHours),
Provider = new ApplicationOAuthProvider()
};
}
But we have to restart the application each time we want to modify expiration time. How to modify AccessTokenExpireTimeSpan dynamically without app restart?
A: The solution of this problem was quite easy. OAuthServerOptions property was made as static public property with private setter
public static OAuthAuthorizationServerOptions OAuthServerOptions { get; private set; }
So OAuth options are available outside of Startup class. And so when host settings are changed, expiration time can be freely changed
Startup.OAuthServerOptions.AccessTokenExpireTimeSpan = TimeSpan.FromHours(hostSettings.ApiTokenExpirationTimeInHours);
|
Q: ASP.Net Web Api how to change token expiration time in runtime For our ASP.Net application, we have recently implemented Web API usage. We use bearer token authentication. Our customer has ordered the opportunity to change token expiration time on the settings page and store it in the database. There is no problem to get it from database and change time in Startup.Auth.cs
static Startup()
{
SettingsService _settingService = EngineContext.Current.Resolve<SettingsService>();
var hostSettings = _settingService.GetHostSettings();
OAuthServerOptions = new OAuthAuthorizationServerOptions()
{
AllowInsecureHttp = true,
TokenEndpointPath = new PathString("/token"),
AccessTokenExpireTimeSpan = TimeSpan.FromHours(hostSettings.ApiTokenExpirationTimeInHours),
Provider = new ApplicationOAuthProvider()
};
}
But we have to restart the application each time we want to modify expiration time. How to modify AccessTokenExpireTimeSpan dynamically without app restart?
A: The solution of this problem was quite easy. OAuthServerOptions property was made as static public property with private setter
public static OAuthAuthorizationServerOptions OAuthServerOptions { get; private set; }
So OAuth options are available outside of Startup class. And so when host settings are changed, expiration time can be freely changed
Startup.OAuthServerOptions.AccessTokenExpireTimeSpan = TimeSpan.FromHours(hostSettings.ApiTokenExpirationTimeInHours);
|
stackoverflow
|
{
"language": "en",
"length": 172,
"provenance": "stackexchange_0000F.jsonl.gz:866213",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44546570"
}
|
198e2f1ba321fbe4ba36629c8dab8cd6b5245443
|
Stackoverflow Stackexchange
Q: @HostListener cause change detection triggers too many times when I'm listening for outside click I have next template of root component, that draws 9 tiles:
<ul>
<li *ngFor="let x of [0,1,2,3,4,5,6,7,8]">
<tile></tile>
</li>
</ul>
and next tile component, where I added HostListener for document click:
import {AfterViewChecked, Component, HostListener} from '@angular/core';
@Component({
selector: 'tile',
template: '<p>tile works!</p>'
})
export class TileComponent implements AfterViewChecked {
ngAfterViewChecked(): void {
console.log('checked');
}
@HostListener('document:click', ['$event'])
onOutsideClick(event: any): void {
// do nothing ...
}
}
Plunker: http://plnkr.co/edit/7wvon25LhXkHQiMcwh48?p=preview
When I run this I see that on each click change detection was called 9^2 times:
I can't understand why.
Can somebody explain to me why change detection triggers n^2 times in this case?
A: Short answer - That is by design.
Since we have a click handler, angular triggers change detection after handler been called.
So, when the first component handles click it cause change detection. Then all the components print "checked".
And that repeated for each component, so I've got 9^2 prints "checked."
And one additional note that OnPush strategy will not help to reduce an amount of prints.
|
Q: @HostListener cause change detection triggers too many times when I'm listening for outside click I have next template of root component, that draws 9 tiles:
<ul>
<li *ngFor="let x of [0,1,2,3,4,5,6,7,8]">
<tile></tile>
</li>
</ul>
and next tile component, where I added HostListener for document click:
import {AfterViewChecked, Component, HostListener} from '@angular/core';
@Component({
selector: 'tile',
template: '<p>tile works!</p>'
})
export class TileComponent implements AfterViewChecked {
ngAfterViewChecked(): void {
console.log('checked');
}
@HostListener('document:click', ['$event'])
onOutsideClick(event: any): void {
// do nothing ...
}
}
Plunker: http://plnkr.co/edit/7wvon25LhXkHQiMcwh48?p=preview
When I run this I see that on each click change detection was called 9^2 times:
I can't understand why.
Can somebody explain to me why change detection triggers n^2 times in this case?
A: Short answer - That is by design.
Since we have a click handler, angular triggers change detection after handler been called.
So, when the first component handles click it cause change detection. Then all the components print "checked".
And that repeated for each component, so I've got 9^2 prints "checked."
And one additional note that OnPush strategy will not help to reduce an amount of prints.
A: @Hostlistener can be costly. Check my answer here to minimize the effect and improve performance.
Detect click outside Angular component
|
stackoverflow
|
{
"language": "en",
"length": 206,
"provenance": "stackexchange_0000F.jsonl.gz:866228",
"question_score": "10",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44546611"
}
|
77d29a13392816554cdc58ba9673a92552a7f70b
|
Stackoverflow Stackexchange
Q: Angular 4 and GoldenLayout - change detection doesn't work I'am using Golden Layout with my app created using Angular 4 (via angular cli). Golden Layout works well. It shows up, and all components are also in place. But since my components are placed in GL, the angular change detection mechanism stopped working.
Have anyone had such problems?
|
Q: Angular 4 and GoldenLayout - change detection doesn't work I'am using Golden Layout with my app created using Angular 4 (via angular cli). Golden Layout works well. It shows up, and all components are also in place. But since my components are placed in GL, the angular change detection mechanism stopped working.
Have anyone had such problems?
|
stackoverflow
|
{
"language": "en",
"length": 58,
"provenance": "stackexchange_0000F.jsonl.gz:866233",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44546637"
}
|
2e878df4202a475a81e21e652ff1aaf7d32043a0
|
Stackoverflow Stackexchange
Q: Portion of label bold - the rest not bold I'm using d3.js and the following snippet renders labels around a pie chart:
text.enter()
.append("text")
.attr("dy", ".35em")
.style("opacity", 0.01)
.text(function(d) {
return (d.data.NAME + ": " + d.data.amount);
});
So a label might read Jimmy: 100
How do I make d.data.NAME render in a bold style but d.data.amount should not be bold ?
A: One solution is using a <tspan> element with a different font-weight for your d.data.amount.
Check the demo:
var svg = d3.select("svg");
var text = svg.append("text")
.attr("x", 10)
.attr("y", 30)
.attr("font-weight", 700)
.text("This is bold...")
.append("tspan")
.attr("font-weight", 300)
.text(" but this is not.")
<script src="https://d3js.org/d3.v4.min.js"></script>
<svg></svg>
In your case, it should be something like this:
//...
.style("font-weight", 700)
.text(function(d) {
return d.data.NAME + ": ";
})
.append("tspan")
.style("font-weight", 300)
.text(function(d) {
return d.data.amount;
});
|
Q: Portion of label bold - the rest not bold I'm using d3.js and the following snippet renders labels around a pie chart:
text.enter()
.append("text")
.attr("dy", ".35em")
.style("opacity", 0.01)
.text(function(d) {
return (d.data.NAME + ": " + d.data.amount);
});
So a label might read Jimmy: 100
How do I make d.data.NAME render in a bold style but d.data.amount should not be bold ?
A: One solution is using a <tspan> element with a different font-weight for your d.data.amount.
Check the demo:
var svg = d3.select("svg");
var text = svg.append("text")
.attr("x", 10)
.attr("y", 30)
.attr("font-weight", 700)
.text("This is bold...")
.append("tspan")
.attr("font-weight", 300)
.text(" but this is not.")
<script src="https://d3js.org/d3.v4.min.js"></script>
<svg></svg>
In your case, it should be something like this:
//...
.style("font-weight", 700)
.text(function(d) {
return d.data.NAME + ": ";
})
.append("tspan")
.style("font-weight", 300)
.text(function(d) {
return d.data.amount;
});
|
stackoverflow
|
{
"language": "en",
"length": 137,
"provenance": "stackexchange_0000F.jsonl.gz:866240",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44546656"
}
|
f653466bb0845ea8c84dbc90704da181df32835e
|
Stackoverflow Stackexchange
Q: Xamarin stepping over an awaitable method In a WPF app stepping over (F10) an awaitable method takes you to the next line, But in a xamarin android project it does not behave that way (it's as if I pressed F5) and I'm obliged to put a break point on the next line in order to debug properly -It's a pain in the ass-.
async Task SomeMethod()
{
await Task.Delay(1000); <--------- Stepping over this line leaves the function.
int x = 1; <--------- I have to add a breakpoint here.
}
Is it a bug or a feature?
PS: I'm using Visual Studio 2017.
A: This is exactly how await operator works. When you await a Task, code execution will jump out of the current function and yield control to its caller. Then at some point later in time after the awaited Task finishes, it will jump back to execute code after the await statement.
If you step over an await, the debugger will navigate you to the next line of code that is going to be executed. In case of await it is most likely not going to be the following line.
|
Q: Xamarin stepping over an awaitable method In a WPF app stepping over (F10) an awaitable method takes you to the next line, But in a xamarin android project it does not behave that way (it's as if I pressed F5) and I'm obliged to put a break point on the next line in order to debug properly -It's a pain in the ass-.
async Task SomeMethod()
{
await Task.Delay(1000); <--------- Stepping over this line leaves the function.
int x = 1; <--------- I have to add a breakpoint here.
}
Is it a bug or a feature?
PS: I'm using Visual Studio 2017.
A: This is exactly how await operator works. When you await a Task, code execution will jump out of the current function and yield control to its caller. Then at some point later in time after the awaited Task finishes, it will jump back to execute code after the await statement.
If you step over an await, the debugger will navigate you to the next line of code that is going to be executed. In case of await it is most likely not going to be the following line.
A: Make sure your method is in asynchronous. Tested mine & it is working from my side. Example below:-
Task.Run(async () =>
{
await Task.Delay(1000);
int x = 1;
});
or
async Task YourMethod
{
await Task.Delay(1000);
int x = 1;
}
A: This is, unfortunately, how visual studio debugging currently works. As the Task.Delay() method has been awaited, the programme flow goes back to the method that called YourMethod(). If that call was awaited, and the chain of calls to that method where all awaited back and so forth until it reaches the applications context. E.g. for Xamarin:
e.g.
1 class MyActivity : Activity
2 {
3 // This function is called by the Xamarin/Android systems and is not awaited.
4 // As it is marked as async, any awaited calls within will pause this function,
5 // and the application will continue with the function that called this function,
6 // returning to this function when the awaited call finishes.
7 // This means the UI is not blocked and is responsive to the user.
8 public async void OnCreate()
9 {
10 base.OnCreate();
11 await initialiseAsync(); // awaited - so will return to calling function
12 // while waiting for operation to complete.
13
14 // Code here will run after initialiseAsync() has finished.
15 }
16 public async Task initialiseAsync()
17 {
18 await YourMethod(); // awaited - so will return to Line 11
19 // while waiting for operation to complete.
20
21 // Code here will run after GetNamesAsync() has finished.
22 }
23 }
In pure windows applications, Visual Studio knows all about the application's context and knows that the underlying methods (programme lifecycle, window events, screen-redraws etc.) don't need to be debugged (and the source code is inaccessible). What you are probably seeing is the debugger pause for the 1000ms as there is no code to be debugged.
Xamarin puts an extra layer of code in, with things like the implementation of the base Activity class and all of the Android requirements. Visual Studio doesn't know to skip these and so tries to debug whatever code called the current awaited stack of methods. This will probably be something like the base Activity class's OnCreate() method - which you probably don't have access to the code for.
A: I believe this is a bug, and has been for many years. You can track progress here and chime in - maybe we can get it fixed!
https://github.com/xamarin/xamarin-android/issues/5554
|
stackoverflow
|
{
"language": "en",
"length": 606,
"provenance": "stackexchange_0000F.jsonl.gz:866241",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44546657"
}
|
2f828db231aaf5902d81cc4f2c46090703591cad
|
Stackoverflow Stackexchange
Q: Should package-lock.json also be published? npm 5 introduced package-lock.json, of which the documentation is here.
It states that the file is intended to be included with version control, so anyone cloning your package and installing it will have the same dependency versions. In other words, you should not add it to your .gitignore file.
What it does not state is wether or not the file is intended to be included with a published package. This question could be rephrased as; should package-lock.json be included in .npmignore?
A: It cannot be published.
From the npm documentation:
One key detail about package-lock.json is that it cannot be published, and it will be ignored if found in any place other than the toplevel package
See package-lock.json documentation on docs.npmjs.com.
However, you should be commiting your package-lock.json to git as per the documentation.
This file is intended to be committed into source repositories
hence the common message presented by npm:
created a lockfile as package-lock.json. You should commit this file.
EDIT: A more detailed explanation can be found here.
|
Q: Should package-lock.json also be published? npm 5 introduced package-lock.json, of which the documentation is here.
It states that the file is intended to be included with version control, so anyone cloning your package and installing it will have the same dependency versions. In other words, you should not add it to your .gitignore file.
What it does not state is wether or not the file is intended to be included with a published package. This question could be rephrased as; should package-lock.json be included in .npmignore?
A: It cannot be published.
From the npm documentation:
One key detail about package-lock.json is that it cannot be published, and it will be ignored if found in any place other than the toplevel package
See package-lock.json documentation on docs.npmjs.com.
However, you should be commiting your package-lock.json to git as per the documentation.
This file is intended to be committed into source repositories
hence the common message presented by npm:
created a lockfile as package-lock.json. You should commit this file.
EDIT: A more detailed explanation can be found here.
|
stackoverflow
|
{
"language": "en",
"length": 176,
"provenance": "stackexchange_0000F.jsonl.gz:866258",
"question_score": "40",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44546718"
}
|
864a45ecdaa22a9a4f619a53394d58917b448471
|
Stackoverflow Stackexchange
Q: Template Literals for Javascript Arrays When i try to put an array into a JavaScript array, a la,
> `${[1,2,3]}`
I get back this
'1,2,3'
and not
'[1,2,3]'
in the latest Node & Chrome.
I am missing something incredibly obvious, but need it spelled out to me nevertheless : )
A: You should use JSON.stringify(array)
It can help you to predict conversion to the string any arrays in this array.
const array = [["expected","test",1],0];
const arrayStr = JSON.stringify(array);
const templateResAsString = `${array}`; // expected,test,1,0
const templateResAsarray = `${arrayStr}`; // [["expected","test",1],0]
|
Q: Template Literals for Javascript Arrays When i try to put an array into a JavaScript array, a la,
> `${[1,2,3]}`
I get back this
'1,2,3'
and not
'[1,2,3]'
in the latest Node & Chrome.
I am missing something incredibly obvious, but need it spelled out to me nevertheless : )
A: You should use JSON.stringify(array)
It can help you to predict conversion to the string any arrays in this array.
const array = [["expected","test",1],0];
const arrayStr = JSON.stringify(array);
const templateResAsString = `${array}`; // expected,test,1,0
const templateResAsarray = `${arrayStr}`; // [["expected","test",1],0]
A: By the default, the values that are interpolated into a template literal are converted to their string representation.
For objects that means calling their .toString() method. The string representation of an array is simply a comma separated list of the strings representation of its elements, without leading [ or trailing ]:
console.log(
[1,2,3].toString()
);
A: Consolodiated list including above answers:
const arr = [1,2,3,4,5,'foo','bar','baz']
console.log(JSON.stringify(arr));
console.log(JSON.stringify(arr, null, 2));
console.log(arr.toString());
console.log(`${arr}`);
console.log(arr.join('\n'));
Good Luck...
|
stackoverflow
|
{
"language": "en",
"length": 165,
"provenance": "stackexchange_0000F.jsonl.gz:866267",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44546760"
}
|
8dfe210319dbf671bb05e3ecc3b98aa235e18de0
|
Stackoverflow Stackexchange
Q: Ruby - set instance variable inside class method from string I have a class with a method register(key, val). I am trying to add key as a instance variable of the class and set it equal to val. Right now I'm trying to use self.instance_variable_set(':@' + key, val) but I'm getting this error:
in `instance_variable_set': `:@table' is not allowed as an instance variable name (NameError)
I am calling register('table', {'key' => 'value'})
Any idea how to do this properly? Thanks!
A: Remove : from your method.
self.instance_variable_set('@' + key, val)
Moreover, self can be redundant here. Try instance_variable_set('@' + key, val).
And prefer to use interpolation over concatenation. instance_variable_set("@#{key}", val)
|
Q: Ruby - set instance variable inside class method from string I have a class with a method register(key, val). I am trying to add key as a instance variable of the class and set it equal to val. Right now I'm trying to use self.instance_variable_set(':@' + key, val) but I'm getting this error:
in `instance_variable_set': `:@table' is not allowed as an instance variable name (NameError)
I am calling register('table', {'key' => 'value'})
Any idea how to do this properly? Thanks!
A: Remove : from your method.
self.instance_variable_set('@' + key, val)
Moreover, self can be redundant here. Try instance_variable_set('@' + key, val).
And prefer to use interpolation over concatenation. instance_variable_set("@#{key}", val)
|
stackoverflow
|
{
"language": "en",
"length": 111,
"provenance": "stackexchange_0000F.jsonl.gz:866281",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44546784"
}
|
21df91a89bb09aab84e085ae42cdc5d4a2a2f8b5
|
Stackoverflow Stackexchange
Q: Unsupported method: BaseConfig.getApplicationIdSuffix() So I'm reading Android 6 for Programmers: An App-Driven Approach and the first two app examples I had no issues with the examples, this time the FlagQuiz example when loaded in Android Studio 3.0 Canary-3 I'm getting this error which isn't letting me build the project:
Error:Unsupported method: BaseConfig.getApplicationIdSuffix().
The version of Gradle you connect to does not support that method.
To resolve the problem you can change/upgrade the target version of Gradle you connect to.
Alternatively, you can ignore this exception and read other information from the model.
You can download the source from the book site here to test with the same code base that I'm testing from.
A: For Android Studio 3 I need to update two files to fix the error:--
1. app/build.gradle
buildscript {
repositories {
jcenter()
mavenCentral()
maven {
url 'https://maven.google.com/'
name 'Google'
}
}
dependencies {
classpath 'com.android.tools.build:gradle:3.0.1'
}
}
2. app/gradle/wrapper/gradle-wrapper.properties
distributionUrl=https\://services.gradle.org/distributions/gradle-4.1-all.zip
|
Q: Unsupported method: BaseConfig.getApplicationIdSuffix() So I'm reading Android 6 for Programmers: An App-Driven Approach and the first two app examples I had no issues with the examples, this time the FlagQuiz example when loaded in Android Studio 3.0 Canary-3 I'm getting this error which isn't letting me build the project:
Error:Unsupported method: BaseConfig.getApplicationIdSuffix().
The version of Gradle you connect to does not support that method.
To resolve the problem you can change/upgrade the target version of Gradle you connect to.
Alternatively, you can ignore this exception and read other information from the model.
You can download the source from the book site here to test with the same code base that I'm testing from.
A: For Android Studio 3 I need to update two files to fix the error:--
1. app/build.gradle
buildscript {
repositories {
jcenter()
mavenCentral()
maven {
url 'https://maven.google.com/'
name 'Google'
}
}
dependencies {
classpath 'com.android.tools.build:gradle:3.0.1'
}
}
2. app/gradle/wrapper/gradle-wrapper.properties
distributionUrl=https\://services.gradle.org/distributions/gradle-4.1-all.zip
A: Alright I figured out how to fix this issue.
*
*Open build.gradle and change the gradle version to the recommended version:
classpath 'com.android.tools.build:gradle:1.3.0' to
classpath 'com.android.tools.build:gradle:2.3.2'
*Hit 'Try Again'
*In the messages box it'll say 'Fix Gradle Wrapper and re-import project' Click that, since the minimum gradle version is 3.3
*A new error will popup and say The SDK Build Tools revision (23.0.1) is too low for project ':app'. Minimum required is 25.0.0 - Hit Update Build Tools version and sync project
*A window may popup that says Android Gradle Plugin Update recommended, just update from there.
Now the project should be runnable now on any of your android virtual devices.
A: You can do this by changing the gradle file.
build.gradle > change
dependencies {
classpath 'com.android.tools.build:gradle:3.0.1'
}
gradle-wrapper.properties > update
distributionUrl=https://services.gradle.org/distributions/gradle-4.6-all.zip
A: First, open your application module build.gradle file.
Check the classpath according to your project dependency. If not change the version of this classpath.
from:
classpath 'com.android.tools.build:gradle:1.0.0'
To:
classpath 'com.android.tools.build:gradle:2.3.2'
or higher version according to your gradle of android studio.
If its still problem, then change buildToolsVersion:
From:
buildToolsVersion '21.0.0'
To:
buildToolsVersion '25.0.0'
then hit 'Try again' and gradle will automatically sync.
This will solve it.
A: In my case, Android Studio 3.0.1, I fixed the issue with the following two steps.
Step 1: Change Gradle plugin version in project-level build.gradle
buildscript {
repositories {
jcenter()
mavenCentral()
maven {
url 'https://maven.google.com/'
name 'Google'
}
}
dependencies {
classpath 'com.android.tools.build:gradle:3.0.1'
}
}
Step 2: Change gradle version
distributionUrl=https\://services.gradle.org/distributions/gradle-4.1-all.zip
A: I also faced the same issue and got a solution very similar:
*
*Changing the classpath to classpath 'com.android.tools.build:gradle:2.3.2'
Image after adding the classpath
*A new message indicating to Update Build Tool version, so just click that message to update.
Update
A: Change your gradle version or update it
dependencies {
classpath 'com.android.tools.build:gradle:3.0.1'
}
alt+enter and choose "replace with specific version".
A: If this ()Unsupported method: BaseConfig.getApplicationIdSuffix Android Project is old and you have updated Android Studio, what I did was simply CLOSE PROJECT and ran it again. It solved the issue for me. Did not add any dependencies or whatever as described by other answers.
A: I did the following to make this run on AS 3.5
*
*app/ build.gradle
apply plugin: 'com.android.application'
android {
compileSdkVersion 21
buildToolsVersion "25.0.0"
defaultConfig {
applicationId "com.example.android.mobileperf.render"
minSdkVersion 14
targetSdkVersion 21
versionCode 1
versionName "1.0"
}
buildTypes {
release {
minifyEnabled false
proguardFiles getDefaultProguardFile('proguard-android.txt'), 'proguard-rules.pro'
}
}
}
dependencies {
implementation fileTree(dir: 'libs', include: ['*.jar'])
implementation 'com.android.support:appcompat-v7:21.0.0'
implementation 'com.squareup.picasso:picasso:2.71828'
}
*build.gradle
buildscript {
repositories {
jcenter()
mavenCentral()
maven {
url 'https://maven.google.com/'
name 'Google'
}
google()
}
dependencies {
classpath 'com.android.tools.build:gradle:3.0.1'
}
}
allprojects {
repositories {
jcenter()
google()
}
}
*gradle-wrapper.properties
distributionUrl=https://services.gradle.org/distributions/gradle-4.1-all.zip
|
stackoverflow
|
{
"language": "en",
"length": 605,
"provenance": "stackexchange_0000F.jsonl.gz:866303",
"question_score": "256",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44546849"
}
|
e8c24f4bfefe7124cc3a905688235f5f3b7aa765
|
Stackoverflow Stackexchange
Q: Change privacy setting from application in ios swift 3 and come back on same state of application i developed a ios application for to capture photos.
i am trying to access camera for capture photos for that i updated in info.plist "Privacy - Camera Usage Description".
but when permission dialog box opened i am click on don't allow .
to change camera permission goto setting using
UIApplication.shared.open(URL(string:UIApplicationOpenSettingsURLString)!)
but after changing camera permission and click on back the application will be restrated.
how to come on same state on click back from setting
A: Its not possible directly
Still you want to achieve you have to manage accordingly.
When coming back from settings app, our app restarts again that is didFinishLaunchingWithOptions called it will start again.
You can use userdefault to store a bool or something and then push to your screen accordingly.
It Works For me.
|
Q: Change privacy setting from application in ios swift 3 and come back on same state of application i developed a ios application for to capture photos.
i am trying to access camera for capture photos for that i updated in info.plist "Privacy - Camera Usage Description".
but when permission dialog box opened i am click on don't allow .
to change camera permission goto setting using
UIApplication.shared.open(URL(string:UIApplicationOpenSettingsURLString)!)
but after changing camera permission and click on back the application will be restrated.
how to come on same state on click back from setting
A: Its not possible directly
Still you want to achieve you have to manage accordingly.
When coming back from settings app, our app restarts again that is didFinishLaunchingWithOptions called it will start again.
You can use userdefault to store a bool or something and then push to your screen accordingly.
It Works For me.
|
stackoverflow
|
{
"language": "en",
"length": 147,
"provenance": "stackexchange_0000F.jsonl.gz:866307",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44546865"
}
|
35d4c62a64b9e2d1f8fc1c9039f2b4f9a31c988a
|
Stackoverflow Stackexchange
Q: mysql select distinct comma delimited values i have a mysql table
id cid c_name keywords
1 28 Stutgart BW,Mercedes,Porsche,Auto,Germany
2 34 Roma Sezar,A.S. Roma
3 28 München BMW,Oktober Fest,Auto,Germany
i need a query to show keywords from cid=28 but i want to see only 1 time a keyword, like (BW,Mercedes,Porsche,Auto,Bmw,Oktober Fest,Germany)
i dont want to list 2 time a keyword, how can resolve this problem?
i have tried distinct but could not get what i want
A: Split it before adding it all up with DISTINCT.Of course,better is to normalize your data(no more than 1 value in a column)
SELECT
GROUP_CONCAT( DISTINCT SUBSTRING_INDEX(SUBSTRING_INDEX(keywords, ',', n.digit+1), ',', -1)) keyword
FROM
t
INNER JOIN
(SELECT 0 digit UNION ALL SELECT 1 UNION ALL SELECT 2 UNION ALL SELECT 3 UNION ALL SELECT 4 UNION ALL SELECT 5 UNION ALL SELECT 6) n
ON LENGTH(REPLACE(keywords, ',' , '')) <= LENGTH(keywords)-n.digit
WHERE cid=28
See it working
|
Q: mysql select distinct comma delimited values i have a mysql table
id cid c_name keywords
1 28 Stutgart BW,Mercedes,Porsche,Auto,Germany
2 34 Roma Sezar,A.S. Roma
3 28 München BMW,Oktober Fest,Auto,Germany
i need a query to show keywords from cid=28 but i want to see only 1 time a keyword, like (BW,Mercedes,Porsche,Auto,Bmw,Oktober Fest,Germany)
i dont want to list 2 time a keyword, how can resolve this problem?
i have tried distinct but could not get what i want
A: Split it before adding it all up with DISTINCT.Of course,better is to normalize your data(no more than 1 value in a column)
SELECT
GROUP_CONCAT( DISTINCT SUBSTRING_INDEX(SUBSTRING_INDEX(keywords, ',', n.digit+1), ',', -1)) keyword
FROM
t
INNER JOIN
(SELECT 0 digit UNION ALL SELECT 1 UNION ALL SELECT 2 UNION ALL SELECT 3 UNION ALL SELECT 4 UNION ALL SELECT 5 UNION ALL SELECT 6) n
ON LENGTH(REPLACE(keywords, ',' , '')) <= LENGTH(keywords)-n.digit
WHERE cid=28
See it working
A: If you want to get a dynamic output then you can use the following query to get a distinct comma delimited values in a single record.
Note: here doesn't matter how many values are in comma delimited row & it's fetched distinct record from a number of rows based on your condition
$tag_list = DB::select('SELECT
TRIM(TRAILING "," FROM REPLACE(GROUP_CONCAT(DISTINCT keywords, ","),",,",",")) tag_list
FROM
test
WHERE id = 28');
$unique_tags = implode(',', array_unique(explode(",",$result[0]->search_tags)));
|
stackoverflow
|
{
"language": "en",
"length": 226,
"provenance": "stackexchange_0000F.jsonl.gz:866340",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44546968"
}
|
90d1007e52ab4d87c15f946d7d72544a13ec4f4d
|
Stackoverflow Stackexchange
Q: Java add list of specific class to list of java.lang.Object works with java 8 streams - why? public class Test {
static List<Object> listA = new ArrayList<>();
public static void main(final String[] args) {
final List<TestClass> listB = new ArrayList<>();
listB.add(new TestClass());
// not working
setListA(listB);
// working
setListA(listB.stream().collect(Collectors.toList()));
System.out.println();
}
private static void setListA(final List<Object> list) {
listA = list;
}
}
why does it work with streams and does not work for the simple set?
A: For the first case, it fails because List<TestClass> is not a subtype of List<Object>.1
For the second case, we have the following method declarations:
interface Stream<T> {
// ...
<R, A> R collect(Collector<? super T, A, R> collector)
}
and:
class Collectors {
// ...
public static <T> Collector<T, ?, List<T>> toList()
}
This allows Java to infer the generic type parameters from the context.2 In this case List<Object> is inferred for R, and Object for T.
Thus your code is equivalent to this:
Collector<Object, ?, List<Object>> tmpCollector = Collectors.toList();
List<Object> tmpList = listB.stream().collect(tmpCollector);
setListA(tmpList);
1. See e.g. here.
2. See e.g. here or here.
|
Q: Java add list of specific class to list of java.lang.Object works with java 8 streams - why? public class Test {
static List<Object> listA = new ArrayList<>();
public static void main(final String[] args) {
final List<TestClass> listB = new ArrayList<>();
listB.add(new TestClass());
// not working
setListA(listB);
// working
setListA(listB.stream().collect(Collectors.toList()));
System.out.println();
}
private static void setListA(final List<Object> list) {
listA = list;
}
}
why does it work with streams and does not work for the simple set?
A: For the first case, it fails because List<TestClass> is not a subtype of List<Object>.1
For the second case, we have the following method declarations:
interface Stream<T> {
// ...
<R, A> R collect(Collector<? super T, A, R> collector)
}
and:
class Collectors {
// ...
public static <T> Collector<T, ?, List<T>> toList()
}
This allows Java to infer the generic type parameters from the context.2 In this case List<Object> is inferred for R, and Object for T.
Thus your code is equivalent to this:
Collector<Object, ?, List<Object>> tmpCollector = Collectors.toList();
List<Object> tmpList = listB.stream().collect(tmpCollector);
setListA(tmpList);
1. See e.g. here.
2. See e.g. here or here.
A: This line
setListA(listB);
doesn't work because List in Java is invariant, meaning List<TestClass> doesn't extends List<Object> when TestClass extends Object. More details here
This line
setListA(listB.stream().collect(Collectors.toList()));
works because Java infer Object for Collector's generic type from this method signature setListA(final List<Object> list) and so you actually pass List<Object> there
A: the type parameters of Java Generic is invariance which means it can't be inherited as type parameters class hierarchy. The common parent of List<TestClass> and List<Object> is List<?>.
you can see detailed answer about java generic wildcard from kotlin & java. for example:
List<String> strings = new ArrayList<String>();
List<CharSequence> sequences = strings; // can't work
List<? extends CharSequence> parent1 = strings; // works fine
List<?> parent2 = strings; // works fine
// ^--- is equaivlent to List<? extends Object>
the streams approach is transform a List<TestClass> to List<Object>. if you want it works without transform a List to another List by stream. your methods signature should be as below, and the Collection#addAll also does it in java:
List<?> listA = new ArrayList<>();
private static void setListA(List<?> list) {
listA = list;
}
|
stackoverflow
|
{
"language": "en",
"length": 367,
"provenance": "stackexchange_0000F.jsonl.gz:866391",
"question_score": "8",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44547122"
}
|
37d8ec67a5789d68b93653b53642ff7b7a93263d
|
Stackoverflow Stackexchange
Q: Bitbucket cannot checkout repository in Sourcetree I recently changed my password and cannot checkout my Bitbucket repo in Sourcetree. I keep getting the following error:
git -c diff.mnemonicprefix=false -c core.quotepath=false -c credential.helper=sourcetree fetch origin
fatal: remote error: CAPTCHA required
Your Bitbucket account has been locked. To unlock it and log in again you must
solve a CAPTCHA. This is typically caused by too many attempts to login with an
incorrect password. The account lock prevents your SCM client from accessing
Bitbucket and its mirrors until it is solved, even if you enter your password
correctly.
If you are currently logged in to Bitbucket via a browser you may need to
logout and then log back in in order to solve the CAPTCHA.
Repository:
https://testuser.com/bitbucket/repo.git
I logged in and out many times, solved the CAPTCHAS and still get the same error. Do I need to update something on Bitbucket side? Sourcetree side? Or maybe a URL?
Thanks
A: on macOS, this has worked for me:
*
*Close SourceTree
*Open Keychain Access
*Search for "bitbucket" and remove any entries
*Go to bitbucket website, log-out and login again
*Open SourceTree and enter your password
|
Q: Bitbucket cannot checkout repository in Sourcetree I recently changed my password and cannot checkout my Bitbucket repo in Sourcetree. I keep getting the following error:
git -c diff.mnemonicprefix=false -c core.quotepath=false -c credential.helper=sourcetree fetch origin
fatal: remote error: CAPTCHA required
Your Bitbucket account has been locked. To unlock it and log in again you must
solve a CAPTCHA. This is typically caused by too many attempts to login with an
incorrect password. The account lock prevents your SCM client from accessing
Bitbucket and its mirrors until it is solved, even if you enter your password
correctly.
If you are currently logged in to Bitbucket via a browser you may need to
logout and then log back in in order to solve the CAPTCHA.
Repository:
https://testuser.com/bitbucket/repo.git
I logged in and out many times, solved the CAPTCHAS and still get the same error. Do I need to update something on Bitbucket side? Sourcetree side? Or maybe a URL?
Thanks
A: on macOS, this has worked for me:
*
*Close SourceTree
*Open Keychain Access
*Search for "bitbucket" and remove any entries
*Go to bitbucket website, log-out and login again
*Open SourceTree and enter your password
A: I solved it by setting up SSH for Git by following these steps:
https://confluence.atlassian.com/bitbucket/set-up-ssh-for-git-728138079.html
Repository:
ssh://git@testuser:repo.git
Previously I was using HTTPS protocol and it was causing errors.
|
stackoverflow
|
{
"language": "en",
"length": 220,
"provenance": "stackexchange_0000F.jsonl.gz:866403",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44547157"
}
|
a34088b49454429204cd93f2d000e45aa49b5c24
|
Stackoverflow Stackexchange
Q: How to check unused plugins of activated plugins? I have one wordpress site and there are too many activated plugins.
So I want to know which plugins I can deactivate.
Is there any plugin for it or should I check one by one?
A: Definitely check one by one and regressively examine your site to ensure there aren't any issues. Plugins can do a million things both on the front and back ends, and risk is only multiplied when multiple plugins interact with each other. So, it's highly unlikely there will ever be a plugin just to say "OK" to you deactivating select ones. Way to much complex unpredictability.
|
Q: How to check unused plugins of activated plugins? I have one wordpress site and there are too many activated plugins.
So I want to know which plugins I can deactivate.
Is there any plugin for it or should I check one by one?
A: Definitely check one by one and regressively examine your site to ensure there aren't any issues. Plugins can do a million things both on the front and back ends, and risk is only multiplied when multiple plugins interact with each other. So, it's highly unlikely there will ever be a plugin just to say "OK" to you deactivating select ones. Way to much complex unpredictability.
|
stackoverflow
|
{
"language": "en",
"length": 110,
"provenance": "stackexchange_0000F.jsonl.gz:866418",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44547205"
}
|
2d7965c9d3717bd38b584a0f3f2d9fd3b3d3cd9e
|
Stackoverflow Stackexchange
Q: Pandas: Selecting DataFrame rows between two dates (Datetime Index) I have a Pandas DataFrame with a DatetimeIndex and one column MSE Loss
the index is formatted as follows:
DatetimeIndex(['2015-07-16 07:14:41', '2015-07-16 07:14:48',
'2015-07-16 07:14:54', '2015-07-16 07:15:01',
'2015-07-16 07:15:07', '2015-07-16 07:15:14',...]
It includes several days.
I want to select all the rows (all times) of a particular days without specifically knowing the actual time intervals.
For example: Between 2015-07-16 07:00:00 and 2015-07-16 23:00:00
I tried the approach outlined here: here
But df[date_from:date_to]
outputs:
KeyError: Timestamp('2015-07-16 07:00:00')
So it wants exact indices. Furthermore, I don't have a datecolumn. Only an index with the dates.
What is the best way to select a whole day by just providing a date 2015-07-16 and then how could I select a specific time range within a particular day?
A: You can use truncate:
begin = pd.Timestamp('2015-07-16 07:00:00')
end = pd.Timestamp('2015-07-16 23:00:00')
df.truncate(before=begin, after=end)
|
Q: Pandas: Selecting DataFrame rows between two dates (Datetime Index) I have a Pandas DataFrame with a DatetimeIndex and one column MSE Loss
the index is formatted as follows:
DatetimeIndex(['2015-07-16 07:14:41', '2015-07-16 07:14:48',
'2015-07-16 07:14:54', '2015-07-16 07:15:01',
'2015-07-16 07:15:07', '2015-07-16 07:15:14',...]
It includes several days.
I want to select all the rows (all times) of a particular days without specifically knowing the actual time intervals.
For example: Between 2015-07-16 07:00:00 and 2015-07-16 23:00:00
I tried the approach outlined here: here
But df[date_from:date_to]
outputs:
KeyError: Timestamp('2015-07-16 07:00:00')
So it wants exact indices. Furthermore, I don't have a datecolumn. Only an index with the dates.
What is the best way to select a whole day by just providing a date 2015-07-16 and then how could I select a specific time range within a particular day?
A: You can use truncate:
begin = pd.Timestamp('2015-07-16 07:00:00')
end = pd.Timestamp('2015-07-16 23:00:00')
df.truncate(before=begin, after=end)
A: Option 1:
Sample df:
df
a
2015-07-16 07:14:41 12
2015-07-16 07:14:48 34
2015-07-16 07:14:54 65
2015-07-16 07:15:01 34
2015-07-16 07:15:07 23
2015-07-16 07:15:14 1
It looks like you're trying this without .loc (won't work without it):
df.loc['2015-07-16 07:00:00':'2015-07-16 23:00:00']
a
2015-07-16 07:14:41 12
2015-07-16 07:14:48 34
2015-07-16 07:14:54 65
2015-07-16 07:15:01 34
2015-07-16 07:15:07 23
2015-07-16 07:15:14 1
Option 2:
You can use boolean indexing on the index:
df[(df.index.get_level_values(0) >= '2015-07-16 07:00:00') & (df.index.get_level_values(0) <= '2015-07-16 23:00:00')]
A: You can use the panda function between_time.
the_timed_df=df["my_time_column"].between_time(date_from,date_to)
Should do what you want if I did not mess some detail up :-)
|
stackoverflow
|
{
"language": "en",
"length": 249,
"provenance": "stackexchange_0000F.jsonl.gz:866479",
"question_score": "17",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44547401"
}
|
6fea1f13608d8dedc21be0542cd82cb7c77c9d0c
|
Stackoverflow Stackexchange
Q: When creating a google actions the name not allowed and blocking me from Saving I am trying to create a Google Action and I am getting this error:
Your sample invocations are structured incorrectly. Make sure they all
include either your app name or pronunciation, and trigger your app.
Even if I set the name to
Dr. Detroit
and the pronunciation to
doctor detroit
I'm totally confused with this. any help is appreciated.
A: I also got the same error. It comes because of the fact that your sample invocations does not include the full name of your app/action. Try to make it exactly the same and it might work. I got mine working the same way.
|
Q: When creating a google actions the name not allowed and blocking me from Saving I am trying to create a Google Action and I am getting this error:
Your sample invocations are structured incorrectly. Make sure they all
include either your app name or pronunciation, and trigger your app.
Even if I set the name to
Dr. Detroit
and the pronunciation to
doctor detroit
I'm totally confused with this. any help is appreciated.
A: I also got the same error. It comes because of the fact that your sample invocations does not include the full name of your app/action. Try to make it exactly the same and it might work. I got mine working the same way.
A: It's because they all have to be of the form "Talk to <APP NAME>" or "Ask <APP NAME> about X", etc.
I agree that this is not made very clear, but this is what they mean by "… and trigger your app".
There's a list of allowed phrases here: https://developers.google.com/actions/localization/languages-locales
A: I also got the same error
our sample invocations are structured incorrectly. Make sure they all include either your app name or pronunciation and trigger your app.
What actually helped me is?
For the app Dr. Detroit
You can use invocation as Talk To Dr. Detroit.
This will fix the problem.
Talk To will help the app to get triggered.
Other Trigger Phrases are:
*
*"let me talk to $name"
*"I want to talk to $name"
*"can I talk to $name"
*"talk to $name"
*"let me speak to $name"
*"I want to speak to $name"
*"can I speak to $name"
*"speak to $name"
*"ask $name"
*"ask $name to ..."
From the reference doc:
https://developers.google.com/actions/localization/languages-locales
Here is the ref link:
Build Actions for the Google Assistant tutorial by codelabs
https://codelabs.developers.google.com/codelabs/actions-1/#0
Google documentation on Action on Google
https://developers.google.com/actions/console/publishing#linking_to_your_actions
A: This solved my same issue.
View Logs in simulator window. Expand log for Sending request with post data and make sure that inside rawInput element query attribute is matching with your sample invocation (overview -> app information -> details -> sample invocation)
Hope this can solve your issue.
A: Put the same name in the invocations as the full name of your application.
A: In this case, make sure that Sample innovation name and display name should be the same.
Then there will be no error encountered at the release stage.
|
stackoverflow
|
{
"language": "en",
"length": 398,
"provenance": "stackexchange_0000F.jsonl.gz:866481",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44547406"
}
|
b2b22743393f95f33af45eaef8b2ee61aa64528f
|
Stackoverflow Stackexchange
Q: Disable/Prevent XUnit Warnings on build in .NET Core Test Project I have a project with several large test cases in it and the project takes about 2-3 minutes to build. I am suspicious that it has to do with this new warnings feature... for example:
warning xUnit2003: Do not use Assert.Equal()
warning xUnit2004: Do not use Assert.Equal() to check for boolean conditions.
It is doing this for thousands of lines...
It would be great if there was a way to disable this feature. Not sure if it has to do with the visual studio runner or xunit itself.
A: You could disable the warnings as follows:
*
*for a file
#pragma warning disable xUnit2003, xUnit2004
optionally restore them:
#pragma warning restore xUnit2003, xUnit2004
Or in your project properties:
|
Q: Disable/Prevent XUnit Warnings on build in .NET Core Test Project I have a project with several large test cases in it and the project takes about 2-3 minutes to build. I am suspicious that it has to do with this new warnings feature... for example:
warning xUnit2003: Do not use Assert.Equal()
warning xUnit2004: Do not use Assert.Equal() to check for boolean conditions.
It is doing this for thousands of lines...
It would be great if there was a way to disable this feature. Not sure if it has to do with the visual studio runner or xunit itself.
A: You could disable the warnings as follows:
*
*for a file
#pragma warning disable xUnit2003, xUnit2004
optionally restore them:
#pragma warning restore xUnit2003, xUnit2004
Or in your project properties:
|
stackoverflow
|
{
"language": "en",
"length": 129,
"provenance": "stackexchange_0000F.jsonl.gz:866497",
"question_score": "6",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44547443"
}
|
ca89332cd45251a649195dede3cd20f63bf6bd5f
|
Stackoverflow Stackexchange
Q: Filter one column in table in Google Data Studio I want to filter for unique events based on event category= Landing Page Links for dimensions=campaign and source. The entire table is already filtered for certain campaigns and sources. But on this particular column in the table I want event category filter in google data studio report. Is it possible?
I have tried creating calculated fields using case when but it is throwing error.
A: Not sure what you want to do but it you want to have different filters for 2 columns in one table, you can use data blending. If you have data source A and wants filter1 on one column and filter2 on the other, blend Source A with filter1 to SourceA with filter2. You just have to configure the Joint Keys properly to make it give the data you are expecting.
|
Q: Filter one column in table in Google Data Studio I want to filter for unique events based on event category= Landing Page Links for dimensions=campaign and source. The entire table is already filtered for certain campaigns and sources. But on this particular column in the table I want event category filter in google data studio report. Is it possible?
I have tried creating calculated fields using case when but it is throwing error.
A: Not sure what you want to do but it you want to have different filters for 2 columns in one table, you can use data blending. If you have data source A and wants filter1 on one column and filter2 on the other, blend Source A with filter1 to SourceA with filter2. You just have to configure the Joint Keys properly to make it give the data you are expecting.
|
stackoverflow
|
{
"language": "en",
"length": 145,
"provenance": "stackexchange_0000F.jsonl.gz:866503",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44547462"
}
|
1e6294a6a4e2104b6483e333dc35fd004bbf2824
|
Stackoverflow Stackexchange
Q: Generating thumbnail of a pdf using PDF.js I would like to generate a thumbnail from a pdf file using PDF.js, but it isn't like anothers js that have just one file and all needed to include the js in your project is to write:
<script src="any.js"></script>
How can I use PDF.js in my project? I'm using PHP in backend.
A: I figured it out, the scale is not a parameter. The parameters are an object with field of scale that needed to be set.
function makeThumb(page) {
// draw page to fit into 96x96 canvas
var vp = page.getViewport({ scale: 1, });
var canvas = document.createElement("canvas");
var scalesize = 1;
canvas.width = vp.width * scalesize;
canvas.height = vp.height * scalesize;
var scale = Math.min(canvas.width / vp.width, canvas.height / vp.height);
console.log(vp.width, vp.height, scale);
return page.render({ canvasContext: canvas.getContext("2d"), viewport: page.getViewport({ scale: scale }) }).promise.then(function () {
return canvas;
});
}
|
Q: Generating thumbnail of a pdf using PDF.js I would like to generate a thumbnail from a pdf file using PDF.js, but it isn't like anothers js that have just one file and all needed to include the js in your project is to write:
<script src="any.js"></script>
How can I use PDF.js in my project? I'm using PHP in backend.
A: I figured it out, the scale is not a parameter. The parameters are an object with field of scale that needed to be set.
function makeThumb(page) {
// draw page to fit into 96x96 canvas
var vp = page.getViewport({ scale: 1, });
var canvas = document.createElement("canvas");
var scalesize = 1;
canvas.width = vp.width * scalesize;
canvas.height = vp.height * scalesize;
var scale = Math.min(canvas.width / vp.width, canvas.height / vp.height);
console.log(vp.width, vp.height, scale);
return page.render({ canvasContext: canvas.getContext("2d"), viewport: page.getViewport({ scale: scale }) }).promise.then(function () {
return canvas;
});
}
A: Based on helloworld example:
function makeThumb(page) {
// draw page to fit into 96x96 canvas
var vp = page.getViewport(1);
var canvas = document.createElement("canvas");
canvas.width = canvas.height = 96;
var scale = Math.min(canvas.width / vp.width, canvas.height / vp.height);
return page.render({canvasContext: canvas.getContext("2d"), viewport: page.getViewport(scale)}).promise.then(function () {
return canvas;
});
}
pdfjsLib.getDocument("https://raw.githubusercontent.com/mozilla/pdf.js/ba2edeae/web/compressed.tracemonkey-pldi-09.pdf").promise.then(function (doc) {
var pages = []; while (pages.length < doc.numPages) pages.push(pages.length + 1);
return Promise.all(pages.map(function (num) {
// create a div for each page and build a small canvas for it
var div = document.createElement("div");
document.body.appendChild(div);
return doc.getPage(num).then(makeThumb)
.then(function (canvas) {
div.appendChild(canvas);
});
}));
}).catch(console.error);
<script src="//npmcdn.com/pdfjs-dist/build/pdf.js"></script>
|
stackoverflow
|
{
"language": "en",
"length": 246,
"provenance": "stackexchange_0000F.jsonl.gz:866547",
"question_score": "12",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44547585"
}
|
78d40528d3f320ff4936cc9f75c30c5890201c95
|
Stackoverflow Stackexchange
Q: jQuery DataTables CSV export utf-8 I want to export my datatable to CSV. How to specify utf-8 encoding?
http://jsfiddle.net/ebRXw/3058/
Mamadou Diôf become Mamadou Diôf after the export
I've tryied to add "bom":true but still the problem
A: Put bom in csv extend option:
$(document).ready(function() {
var dataSet = [
[ "Tiger Nixon", "System Architect", "Edinburgh", "5421", "2011/04/25", "$320,800" ],
[ "Garrett Winters", "Accountant", "Tokyo", "8422", "2011/07/25", "$170,750" ],
[ "Mamadou Diôf", "Junior Technical Author", "San Francisco", "1562", "2009/01/12", "$86,000" ]
];
$('#example').DataTable( {
dom: 'Bfrtip',
data: dataSet,
columns: [
{ title: "Name" },
{ title: "Position" },
{ title: "Office" },
{ title: "Extn." },
{ title: "Start date" },
{ title: "Salary" }
],
buttons: [
{
extend: 'csv',
charset: 'UTF-8',
fieldSeparator: ';',
bom: true,
filename: 'CsvTest',
title: 'CsvTest'
}
]
});
});
|
Q: jQuery DataTables CSV export utf-8 I want to export my datatable to CSV. How to specify utf-8 encoding?
http://jsfiddle.net/ebRXw/3058/
Mamadou Diôf become Mamadou Diôf after the export
I've tryied to add "bom":true but still the problem
A: Put bom in csv extend option:
$(document).ready(function() {
var dataSet = [
[ "Tiger Nixon", "System Architect", "Edinburgh", "5421", "2011/04/25", "$320,800" ],
[ "Garrett Winters", "Accountant", "Tokyo", "8422", "2011/07/25", "$170,750" ],
[ "Mamadou Diôf", "Junior Technical Author", "San Francisco", "1562", "2009/01/12", "$86,000" ]
];
$('#example').DataTable( {
dom: 'Bfrtip',
data: dataSet,
columns: [
{ title: "Name" },
{ title: "Position" },
{ title: "Office" },
{ title: "Extn." },
{ title: "Start date" },
{ title: "Salary" }
],
buttons: [
{
extend: 'csv',
charset: 'UTF-8',
fieldSeparator: ';',
bom: true,
filename: 'CsvTest',
title: 'CsvTest'
}
]
});
});
|
stackoverflow
|
{
"language": "en",
"length": 135,
"provenance": "stackexchange_0000F.jsonl.gz:866560",
"question_score": "5",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44547622"
}
|
ff33e4fadce0921fd1dd96c6290a63e65fca2e9f
|
Stackoverflow Stackexchange
Q: How to stop Webpack minifying HTML? I have read the part of the Webpack documentation that explains why Webpack will minify HTML when setting a loader using the module.loaders syntax. But I can't find anywhere that explains how to stop this. I am using pug-loader and html-webpack-plugin to process my templates, but Webpack always spits them out with the HTML minified.
How can I stop this?
{
test: /\.pug$/,
use: 'pug-loader'
}
new HtmlWebpackPlugin({
title: 'Home',
filename: 'index.html',
template: './src/index.pug',
inject: 'head',
chunks: ['app'],
hash: true
}),
A: There's an option for html-webpack-plugin. minify: false. Have you tried adding that?
https://github.com/jantimon/html-webpack-plugin#configuration
|
Q: How to stop Webpack minifying HTML? I have read the part of the Webpack documentation that explains why Webpack will minify HTML when setting a loader using the module.loaders syntax. But I can't find anywhere that explains how to stop this. I am using pug-loader and html-webpack-plugin to process my templates, but Webpack always spits them out with the HTML minified.
How can I stop this?
{
test: /\.pug$/,
use: 'pug-loader'
}
new HtmlWebpackPlugin({
title: 'Home',
filename: 'index.html',
template: './src/index.pug',
inject: 'head',
chunks: ['app'],
hash: true
}),
A: There's an option for html-webpack-plugin. minify: false. Have you tried adding that?
https://github.com/jantimon/html-webpack-plugin#configuration
A: This issue may help you.
loaders: [
{
test: /\.pug$/,
exclude: /(node_modules)/,
loader: "pug-html",
query: {
pretty: true
}
}
]
A: Below command works for both npm run dev and npm run prod
module: {
rules: [{
test: /\.pug$/,
use: [
'html-loader?minimize=false',
'pug-html-loader?pretty=true'
]
}]
},
A: This works for me:
rules: [{
test: /\.pug$/,
use: [
{
loader: 'html-loader',
options: {
minimize: false
}
},
{
loader: 'pug-html-loader',
options: {
pretty: true
}
}
],
}],
|
stackoverflow
|
{
"language": "en",
"length": 181,
"provenance": "stackexchange_0000F.jsonl.gz:866570",
"question_score": "6",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44547649"
}
|
625d484c67b06cc7036a2b4202603923b2be58fc
|
Stackoverflow Stackexchange
Q: Python numpy equivalent of bandpower() from MATLAB I am using Python numpy's ftt.ftt() method to generate the fourier transform of a signal. However I want to calculate the bandpower over a range of frequencies. MATLAB has the method bandpower(x,fs,freqrange), I am trying to simulate specifically this syntax of the function. Source: https://www.mathworks.com/help/signal/ref/bandpower.html
It doesn't look like numpy has an equivalent function, but does anyone know a code snippet I can use to mimic bandpower(x,fs,freqrange)? It's not clear to me what exactly is going on behind the scenes in the function.
Note: If you know some non-Python pseudocode that would achieve the Matlab function, that would also be helpful.
A: The following snippet for computing the power in the band [fmin, fmax] worked for me:
import scipy
def bandpower(x, fs, fmin, fmax):
f, Pxx = scipy.signal.periodogram(x, fs=fs)
ind_min = scipy.argmax(f > fmin) - 1
ind_max = scipy.argmax(f > fmax) - 1
return scipy.trapz(Pxx[ind_min: ind_max], f[ind_min: ind_max])
|
Q: Python numpy equivalent of bandpower() from MATLAB I am using Python numpy's ftt.ftt() method to generate the fourier transform of a signal. However I want to calculate the bandpower over a range of frequencies. MATLAB has the method bandpower(x,fs,freqrange), I am trying to simulate specifically this syntax of the function. Source: https://www.mathworks.com/help/signal/ref/bandpower.html
It doesn't look like numpy has an equivalent function, but does anyone know a code snippet I can use to mimic bandpower(x,fs,freqrange)? It's not clear to me what exactly is going on behind the scenes in the function.
Note: If you know some non-Python pseudocode that would achieve the Matlab function, that would also be helpful.
A: The following snippet for computing the power in the band [fmin, fmax] worked for me:
import scipy
def bandpower(x, fs, fmin, fmax):
f, Pxx = scipy.signal.periodogram(x, fs=fs)
ind_min = scipy.argmax(f > fmin) - 1
ind_max = scipy.argmax(f > fmax) - 1
return scipy.trapz(Pxx[ind_min: ind_max], f[ind_min: ind_max])
A: def bandpower(x, fs, fmin, fmax, time):
# f, Pxx = scipy.signal.periodogram(x, fs=fs)
f, Pxx = signal.welch(x, fs, nperseg=fs*time)
ind_min = np.argmax(f > fmin) - 1
ind_max = np.argmax(f > fmax) - 1
return np.trapz(Pxx[ind_min: ind_max], f[ind_min: ind_max])
|
stackoverflow
|
{
"language": "en",
"length": 194,
"provenance": "stackexchange_0000F.jsonl.gz:866573",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44547669"
}
|
706de63018e807e0dd55b9cbde7a15a076a45cd4
|
Stackoverflow Stackexchange
Q: Is there a way to send the verification email with the Firebase Admin SDK from my Node.js server? Is there a way to send the email verification email from my server ?
This is how it's done on the client:
authData.sendEmailVerification().then(function() {
Is there a way to do it on the server ?
A: I just came across the same problem as you. There is a function to generate the verification link using user's email address.
I used this function on an array of email addresses, then load the result to my mail automation API to send mails out. This function is weirdly not documented:
admin.auth().generateEmailVerificationLink([EMAIL_ADDRESS])
|
Q: Is there a way to send the verification email with the Firebase Admin SDK from my Node.js server? Is there a way to send the email verification email from my server ?
This is how it's done on the client:
authData.sendEmailVerification().then(function() {
Is there a way to do it on the server ?
A: I just came across the same problem as you. There is a function to generate the verification link using user's email address.
I used this function on an array of email addresses, then load the result to my mail automation API to send mails out. This function is weirdly not documented:
admin.auth().generateEmailVerificationLink([EMAIL_ADDRESS])
A: You can use :
axios.post('https://identitytoolkit.googleapis.com/v1/accounts:sendOobCode?key=[API_KEY]',
{ requestType: 'VERIFY_EMAIL', idToken: response.data.idToken }
)
https://firebase.google.com/docs/reference/rest/auth#section-send-email-verification
A: firebaser here
To my surprise there currently is no option to send verification email from within the Admin SDK. I'd recommend you file a feature request.
What you can do from the Admin SDK is update a user profile to mark their email as verified. This allows you to take control of the entire verification flow if you want to, finishing with a call to admin.auth().updateUser(...) (on Node.js, see the link for other supported languages).
|
stackoverflow
|
{
"language": "en",
"length": 197,
"provenance": "stackexchange_0000F.jsonl.gz:866574",
"question_score": "17",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44547671"
}
|
f7f7ef1f201b649e6438dce0d276bf869d129bd5
|
Stackoverflow Stackexchange
Q: .php files downloading instead of executing I am brand new to Apache and PHP.
I created a helloworld.php file and when I run it, the file downloads and the command is never executed.
I use Ubuntu 16.04 with Apache 2 and PHP 7.1 .
I tried completely reinstalling everything and also editing the apache2.conf file with this :
<FilesMatch \.php$>
SetHandler application/x-httpd-php
</FilesMatch>
But this did not work.
I tried installing LAMP from : https://www.digitalocean.com/community/tutorials/how-to-install-linux-apache-mysql-php-lamp-stack-on-ubuntu-16-04
But during the last step, I created an info.php file with these lines of code :
<?php
phpinfo();
?>
And the above lines were displayed as it is.
I tried finding the solution, but most of them mention a httpd.conf file, which I am not able to find anywhere.
I know this question has been asked multiple times before, but I didn't find a satisfactory solution.
A: Seems like apache PHP module is not installed. First of all install apache php module from following command:
sudo apt install php libapache2-mod-php
And then enable the module with following command:
sudo a2enmod php7.0
|
Q: .php files downloading instead of executing I am brand new to Apache and PHP.
I created a helloworld.php file and when I run it, the file downloads and the command is never executed.
I use Ubuntu 16.04 with Apache 2 and PHP 7.1 .
I tried completely reinstalling everything and also editing the apache2.conf file with this :
<FilesMatch \.php$>
SetHandler application/x-httpd-php
</FilesMatch>
But this did not work.
I tried installing LAMP from : https://www.digitalocean.com/community/tutorials/how-to-install-linux-apache-mysql-php-lamp-stack-on-ubuntu-16-04
But during the last step, I created an info.php file with these lines of code :
<?php
phpinfo();
?>
And the above lines were displayed as it is.
I tried finding the solution, but most of them mention a httpd.conf file, which I am not able to find anywhere.
I know this question has been asked multiple times before, but I didn't find a satisfactory solution.
A: Seems like apache PHP module is not installed. First of all install apache php module from following command:
sudo apt install php libapache2-mod-php
And then enable the module with following command:
sudo a2enmod php7.0
|
stackoverflow
|
{
"language": "en",
"length": 177,
"provenance": "stackexchange_0000F.jsonl.gz:866578",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44547689"
}
|
af2e03ce3f5b0d124697b7162707c643087d2510
|
Stackoverflow Stackexchange
Q: How do I configure a parameterized git alias? I frequently run
git log -10 --author="<author name>" --grep="<story of interest>"
Is it possible to configure a git alias similar to
git by "<author name>" "<story of interest>" -10
that will accomplish the same thing?
The documentation makes no mention of parameters.
A: You can create alias as a shell command:
git config alias.agrep '!f() { git log -10 --author="$1" --grep="$2"; }; f'
Now call git agrep with 2 parameters: git agrep Matt test.
See GitAlias repo for dozens of useful aliases and examples. Full disclosure: I'm a contributor.
|
Q: How do I configure a parameterized git alias? I frequently run
git log -10 --author="<author name>" --grep="<story of interest>"
Is it possible to configure a git alias similar to
git by "<author name>" "<story of interest>" -10
that will accomplish the same thing?
The documentation makes no mention of parameters.
A: You can create alias as a shell command:
git config alias.agrep '!f() { git log -10 --author="$1" --grep="$2"; }; f'
Now call git agrep with 2 parameters: git agrep Matt test.
See GitAlias repo for dozens of useful aliases and examples. Full disclosure: I'm a contributor.
|
stackoverflow
|
{
"language": "en",
"length": 98,
"provenance": "stackexchange_0000F.jsonl.gz:866605",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44547770"
}
|
3a7bc2840fe4050f6e61f7c9c57a83846006345c
|
Stackoverflow Stackexchange
Q: Convert JSON String to JAVA POJO using annotations in Spring Trying to parse the string JSON response to JAVA POJO using Jackson annotation in Spring boot application.
POJO
@JsonAutoDetect(fieldVisibility = JsonAutoDetect.Visibility.ANY )
@JsonRootName(value = "data")
public class Data {
@JsonProperty("ticket")
private String ticket;
public String getTicket() {
return ticket;
}
public void setTicket(String ticket) {
this.ticket = ticket;
}
@Override
public String toString() {
return "\"data:\"{" + "\"ticket\"=\"" + ticket + "\"}";
}
}
Retrieving the ticket from third party API using postForEntity as follows
ResponseEntity<String> response = restTemplate.postForEntity(url, entity, String.class);
However, the third party API sends JSON in string format.
Want to convert this JSON string to JAVA POJO using Jackson annotations.
So that the call to API would become
ResponseEntity<Data> response = restTemplate.postForEntity(url, entity, Data.class);
Any help would be appreciated.
Thanks!
|
Q: Convert JSON String to JAVA POJO using annotations in Spring Trying to parse the string JSON response to JAVA POJO using Jackson annotation in Spring boot application.
POJO
@JsonAutoDetect(fieldVisibility = JsonAutoDetect.Visibility.ANY )
@JsonRootName(value = "data")
public class Data {
@JsonProperty("ticket")
private String ticket;
public String getTicket() {
return ticket;
}
public void setTicket(String ticket) {
this.ticket = ticket;
}
@Override
public String toString() {
return "\"data:\"{" + "\"ticket\"=\"" + ticket + "\"}";
}
}
Retrieving the ticket from third party API using postForEntity as follows
ResponseEntity<String> response = restTemplate.postForEntity(url, entity, String.class);
However, the third party API sends JSON in string format.
Want to convert this JSON string to JAVA POJO using Jackson annotations.
So that the call to API would become
ResponseEntity<Data> response = restTemplate.postForEntity(url, entity, Data.class);
Any help would be appreciated.
Thanks!
|
stackoverflow
|
{
"language": "en",
"length": 134,
"provenance": "stackexchange_0000F.jsonl.gz:866619",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44547813"
}
|
8719a55f58230fc36ba94211fe1384f60e765334
|
Stackoverflow Stackexchange
Q: How to log Mocha/Chai HTTP requests Hi I am new to Mocha/Chai.
I am trying to test some HTTP requests. If would be nice if I could log the actual test request to debug it.
The code I am using looks something like
describe('Get token for super user', () => {
it('it should get a valid token set', (done) => {
let req = chai.request(app)
req
.get('/oauth/token')
.set('Content-Type','application/x-www-form-urlencoded')
.set('Authorization','Basic blah')
.field('grant_type', 'password')
.field('username', superUser)
.field('password', superPass)
.end((err, res) => {
console.log('*******' , req)
res.should.have.status(200)
done()
})
})
})
How would I log the request itself, I don't see a neat way of doing this from the API docs ?
A: Simplest way to get and log all info about request - response object:
chai.request('http://...')
.post('/endpoint')
.send('{"a":1}')
.end((err, response) => {
console.log(response);
done();
});
|
Q: How to log Mocha/Chai HTTP requests Hi I am new to Mocha/Chai.
I am trying to test some HTTP requests. If would be nice if I could log the actual test request to debug it.
The code I am using looks something like
describe('Get token for super user', () => {
it('it should get a valid token set', (done) => {
let req = chai.request(app)
req
.get('/oauth/token')
.set('Content-Type','application/x-www-form-urlencoded')
.set('Authorization','Basic blah')
.field('grant_type', 'password')
.field('username', superUser)
.field('password', superPass)
.end((err, res) => {
console.log('*******' , req)
res.should.have.status(200)
done()
})
})
})
How would I log the request itself, I don't see a neat way of doing this from the API docs ?
A: Simplest way to get and log all info about request - response object:
chai.request('http://...')
.post('/endpoint')
.send('{"a":1}')
.end((err, response) => {
console.log(response);
done();
});
|
stackoverflow
|
{
"language": "en",
"length": 134,
"provenance": "stackexchange_0000F.jsonl.gz:866629",
"question_score": "5",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44547829"
}
|
cfe3119fbbe88cf51ae747a72002c885b3b48490
|
Stackoverflow Stackexchange
Q: Downloading private patches from Github When a PR is created in the private repos of my organization, I receive emails from Github with links such as http://github.com/<my org>/<project>/<PR #>.patch. I would like to download such links with curl; as-is, I get a 404, and I can't seem to find the right incantation with -H "Authorization: <oauth token>" to make it work.
A: You can use Github API to do this, get the pull request with this API :
GET /repos/:owner/:repo/pulls/:number
You can use a personal access token with repos scope to get the result for a private repo with the authorization header : -H 'Authorization: token YOUR_TOKEN'
Use commits comparison and pull request media types :
*
*patch : application/vnd.github.VERSION.patch
*diff : application/vnd.github.VERSION.diff
The curl requests are :
*
*request patch for PR #18 :
curl -H 'Authorization: token YOUR_TOKEN' \
-H 'Accept: application/vnd.github.VERSION.patch' \
https://api.github.com/repos/<my org>/<project>/pulls/18
*request diff for PR #18
curl -H 'Authorization: token YOUR_TOKEN' \
-H 'Accept: application/vnd.github.VERSION.diff' \
https://api.github.com/repos/<my org>/<project>/pulls/18
|
Q: Downloading private patches from Github When a PR is created in the private repos of my organization, I receive emails from Github with links such as http://github.com/<my org>/<project>/<PR #>.patch. I would like to download such links with curl; as-is, I get a 404, and I can't seem to find the right incantation with -H "Authorization: <oauth token>" to make it work.
A: You can use Github API to do this, get the pull request with this API :
GET /repos/:owner/:repo/pulls/:number
You can use a personal access token with repos scope to get the result for a private repo with the authorization header : -H 'Authorization: token YOUR_TOKEN'
Use commits comparison and pull request media types :
*
*patch : application/vnd.github.VERSION.patch
*diff : application/vnd.github.VERSION.diff
The curl requests are :
*
*request patch for PR #18 :
curl -H 'Authorization: token YOUR_TOKEN' \
-H 'Accept: application/vnd.github.VERSION.patch' \
https://api.github.com/repos/<my org>/<project>/pulls/18
*request diff for PR #18
curl -H 'Authorization: token YOUR_TOKEN' \
-H 'Accept: application/vnd.github.VERSION.diff' \
https://api.github.com/repos/<my org>/<project>/pulls/18
|
stackoverflow
|
{
"language": "en",
"length": 165,
"provenance": "stackexchange_0000F.jsonl.gz:866636",
"question_score": "7",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44547857"
}
|
7f2a098916e7daf04e8522e1a32d72f2403f1d5f
|
Stackoverflow Stackexchange
Q: iex- How to run an elixir project from outside the app folder I have some situation,
in which i need to run an elixir project, from outside the project file.
i.e. i have a folder code/example-app that contains the app (with the mix.exs and all the rest)
and i would like to run that app from code,
without cd-ing into example-app.
Is there a way to do that ?
A: You can specify the location of the mix.exs file using the MIX_EXS environment variable.
MIX_EXS=./code/example-app/mix.exs mix deps.get
You can read more about the environment variables that affect mix in the documentation.
Just note that if you try to execute a task that is defined inside of the project or one of its dependencies, it will not work.
|
Q: iex- How to run an elixir project from outside the app folder I have some situation,
in which i need to run an elixir project, from outside the project file.
i.e. i have a folder code/example-app that contains the app (with the mix.exs and all the rest)
and i would like to run that app from code,
without cd-ing into example-app.
Is there a way to do that ?
A: You can specify the location of the mix.exs file using the MIX_EXS environment variable.
MIX_EXS=./code/example-app/mix.exs mix deps.get
You can read more about the environment variables that affect mix in the documentation.
Just note that if you try to execute a task that is defined inside of the project or one of its dependencies, it will not work.
|
stackoverflow
|
{
"language": "en",
"length": 128,
"provenance": "stackexchange_0000F.jsonl.gz:866647",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44547891"
}
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.