content (stringlengths 86 to 88.9k) | title (stringlengths 0 to 150) | question (stringlengths 1 to 35.8k) | answers (sequence) | answers_scores (sequence) | non_answers (sequence) | non_answers_scores (sequence) | tags (sequence) | name (stringlengths 30 to 130)
---|---|---|---|---|---|---|---|---|
Q:
Why is my ansible skipping over even though my client is centos?
The issue is that my playbook is not installing httpd, even though my client is a CentOS machine and my FQDN is set correctly:
---
- name: Block Example
  hosts: all
  tasks:
    - block:
        - yum: name=httpd state=latest
        - service: name=httpd state=started enabled=yes
      when: ansible_os_family == 'RedHat'
    - block:
        - yum: name=vsftpd state=latest
        - service: name=vsftpd state=started enabled=yes
      when: ansible_fqdn == 'clien1.localdomain'
    - block:
        - yum: name=apache2 state=latest
        - service: name=apache2 state=started enabled=yes
      when: ansible_os_family == 'Debian'
I have tried to use the when clause, yet it still skips over. Is there another configuration I need to do? All the tasks are skipped even though 2 of the 3 conditions are correct.
A:
It looks like you are using the block module incorrectly. The block module is used to group a set of tasks together and apply certain conditions to them as a group, rather than to each individual task within the block. It looks like you are trying to apply the when clause to each individual task within the block, which won't work as intended.
To fix this, you can move the when clauses from the individual tasks to the block itself, like this:
- name: Block Example
  hosts: all
  tasks:
    - block:
        - yum: name=httpd state=latest
        - service: name=httpd state=started enabled=yes
      when: ansible_os_family == 'RedHat'

    - block:
        - yum: name=vsftpd state=latest
        - service: name=vsftpd state=started enabled=yes
      when: ansible_fqdn == 'clien1.localdomain'

    - block:
        - yum: name=apache2 state=latest
        - service: name=apache2 state=started enabled=yes
      when: ansible_os_family == 'Debian'
This way, the tasks within each block will only be executed if the when clause for the block evaluates to true. You can also remove the when clauses from the individual tasks within the block, since they are already being handled by the when clause for the block.
Note that the block module is only available in Ansible 2.3 and later versions.
| Why is my ansible skipping over even though my client is centos? | The issue is that my playbook is not installing httpd, my client is a centos machine and my fqdn is set correctly
---
- name: Block Example
hosts: all
tasks:
- block:
- yum: name=httpd state=latest
- service: name=httpd state=started enabled=yes
when: ansible_os_family == 'RedHat'
- block:
- yum: name=vsftpd state=latest
- service: name=vsftpd state=started enabled=yes
when: ansible_fqdn == 'clien1.localdomain'
- block:
- yum: name=apache2 state=latest
- service: name=apache2 state=started enabled=yes
when: ansible_os_family == 'Debian'
I have tried to use the when clause yet it still skips over, is there another configuration I need to do. All the tasks skip over but 2/3 conditions are correct.
| [
"It looks like you are using the block module incorrectly. The block module is used to group a set of tasks together and apply certain conditions to them as a group, rather than to each individual task within the block. It looks like you are trying to apply the when clause to each individual task within the block, which won't work as intended.\nTo fix this, you can move the when clauses from the individual tasks to the block itself, like this:\n- name: Block Example\n hosts: all\n tasks:\n - block:\n - yum: name=httpd state=latest\n - service: name=httpd state=started enabled=yes\n when: ansible_os_family == 'RedHat'\n\n - block:\n - yum: name=vsftpd state=latest\n - service: name=vsftpd state=started enabled=yes\n when: ansible_fqdn == 'clien1.localdomain'\n\n - block:\n - yum: name=apache2 state=latest\n - service: name=apache2 state=started enabled=yes\n when: ansible_os_family == 'Debian'\n\nThis way, the tasks within each block will only be executed if the when clause for the block evaluates to true. You can also remove the when clauses from the individual tasks within the block, since they are already being handled by the when clause for the block.\nNote that the block module is only available in Ansible 2.3 and later versions\n"
] | [
0
] | [] | [] | [
"ansible"
] | stackoverflow_0074675905_ansible.txt |
Q:
Not getting a fitted curve
I am not getting a fitted curve when I run this code. Instead, I am getting a random curve. Please help. Thanks in advance.
import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import curve_fit

def cauchy(x, l, k, x1, a):
    return l / (1 + np.exp(-k * (x - x1))) + a

amplitude = [11, 9, 15, 18, 23, 62, 225, 537, 534, 251, 341, 8, 716, 653, 673]
distance = np.arange(0, 15)

popt, pcov = curve_fit(cauchy, distance[5:], amplitude[5:], maxfev=1000000,
                       bounds=((-10, -10, -10, 0), (30000, 3000, 30000, 100)),
                       p0=[25000, 1, 0.3, 0])

ran = np.linspace(0, 15, 1000)  # more points for smoother plots
derivative = deriv(ran, *popt[:-1])  # deriv is a user-defined helper (definition not shown)
derivative_normalized = derivative / np.max(derivative)

fig, ax = plt.subplots(figsize=(8, 5))
ax2 = ax.twinx()
ax.plot(distance, amplitude, 'o')
# ax.set_title('pinhole 300')
ax.set_xlabel('steps')
ax.set_ylabel('Amplitude of PS in counts', color='tab:blue')
ax.tick_params(axis="y", labelcolor='tab:blue')
ax.plot(ran, cauchy(ran, *popt), 'tab:blue', label='cauchy fit')
ax.set_xlim(0, 18)

# second axis for better visibility
ax2.set_ylabel('Derivative of Cauchy step function, normalized', color='tab:green')
ax2.plot(ran, derivative_normalized, color='tab:green')
ax2.tick_params(axis="y", labelcolor='tab:green')
plt.tight_layout()
plt.show()
A:
There could be a few reasons why you are not getting a fitted curve in this code. Some potential issues are:
The data provided for fitting the curve is not sufficient or is not appropriate for the Cauchy function. The data should be continuous and have a clear pattern for the curve fitting to work properly.
The initial values provided for the curve fitting parameters are not appropriate. The initial values should be chosen carefully based on the data and the expected shape of the curve.
The bounds provided for the curve fitting parameters are not appropriate. The bounds should be chosen carefully based on the data and the expected shape of the curve.
To fix these issues, you can try the following steps:
Check the data provided for fitting the curve and make sure it is continuous and has a clear pattern. If not, try using different data or transforming the data to make it more appropriate for the Cauchy function.
Choose the initial values for the curve fitting parameters more carefully. You can try using different initial values or using a method to estimate the initial values based on the data.
Choose the bounds for the curve fitting parameters more carefully. You can try using different bounds or using a method to estimate the bounds based on the data.
Overall, it is important to carefully choose the data, initial values, and bounds for the curve fitting to get a good fit.
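For example, a data-driven choice of p0 could look like the sketch below. This is only an illustration of the advice above and is not part of the original post; the heuristics for l0, k0, x10 and a0 are assumptions based on the logistic shape of the cauchy function, not values taken from the question.
import numpy as np
from scipy.optimize import curve_fit

def cauchy(x, l, k, x1, a):
    return l / (1 + np.exp(-k * (x - x1))) + a

amplitude = np.array([11, 9, 15, 18, 23, 62, 225, 537, 534, 251, 341, 8, 716, 653, 673])
distance = np.arange(0, 15)
x, y = distance[5:], amplitude[5:]

# Rough starting guesses derived from the data itself:
a0 = float(y.min())                                    # baseline offset
l0 = float(y.max() - y.min())                          # step height
x10 = float(x[np.argmin(np.abs(y - (a0 + l0 / 2)))])   # x value closest to the half-height point
k0 = 1.0                                               # moderate slope to start with

popt, pcov = curve_fit(cauchy, x, y, p0=[l0, k0, x10, a0], maxfev=100000)
print(popt)
Whether these particular guesses are good depends on the real data; the point is only that starting values near the expected step usually help curve_fit converge, whereas a p0 far from the data (such as 25000 for the step height here) can leave the optimizer stuck.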
| Not getting a fitted curve | I am not getting a fitted curve when I run this code. Instead, I am getting a random curve. Please help. Thanks in advance.
def cauchy(x, l, k, x1, a):
return l / (1+np.exp(-k*(x-x1))) + a
amplitude = [11, 9, 15, 18, 23, 62, 225, 537, 534, 251, 341, 8, 716, 653, 673]
distance = np.arange(0,15)
popt, pcov = curve_fit(cauchy, distance[5:], amplitude[5:], maxfev=1000000, bounds=((-10, -10, -10, 0), (30000, 3000, 30000, 100)),p0=[25000, 1, 0.3, 0])
ran = np.linspace(0, 15,1000) # for smoother plots more points
derivative = deriv(ran, *popt[:-1])
derivative_normalized = derivative/np.max(derivative)
fig, ax = plt.subplots(figsize = (8,5))
ax2 = ax.twinx()
ax.plot(distance,amplitude, 'o')
# ax.set_title('pinhole 300')
ax.set_xlabel('steps')
ax.set_ylabel('Amplitude of PS in counts', color = 'tab:blue')
ax.tick_params(axis="y", labelcolor='tab:blue')
ax.plot(ran, cauchy(ran, *popt), 'tab:blue', label='cauchy fit')
ax.set_xlim(0,18)
#second axis for better visibility
ax2.set_ylabel('Derivative of Cauchy step function, normalized', color = 'tab:green')
ax2.plot(ran, derivative_normalized, color ='tab:green')
ax2.tick_params(axis="y", labelcolor='tab:green')
plt.tight_layout()
plt.show()
| [
"There could be a few reasons why you are not getting a fitted curve in this code. Some potential issues are:\nThe data provided for fitting the curve is not sufficient or is not appropriate for the Cauchy function. The data should be continuous and have a clear pattern for the curve fitting to work properly.\nThe initial values provided for the curve fitting parameters are not appropriate. The initial values should be chosen carefully based on the data and the expected shape of the curve.\nThe bounds provided for the curve fitting parameters are not appropriate. The bounds should be chosen carefully based on the data and the expected shape of the curve.\nTo fix these issues, you can try the following steps:\nCheck the data provided for fitting the curve and make sure it is continuous and has a clear pattern. If not, try using different data or transforming the data to make it more appropriate for the Cauchy function.\nChoose the initial values for the curve fitting parameters more carefully. You can try using different initial values or using a method to estimate the initial values based on the data.\nChoose the bounds for the curve fitting parameters more carefully. You can try using different bounds or using a method to estimate the bounds based on the data.\nOverall, it is important to carefully choose the data, initial values, and bounds for the curve fitting to get a good fit.\n"
] | [
1
] | [] | [] | [
"curve_fitting",
"python"
] | stackoverflow_0074675927_curve_fitting_python.txt |
Q:
Reduction of time complexity of two nested for loop O(N*N)
I am trying to reduce time complexity of the following nested loop which currently has an O(N*N) time complexity:
for(i = 0; i < N-1; i++){
    for(j = i+1; j < N; j++){
        if((A[j] > B[i])){
            ctr++; //counting elements satisfying the condition
        }
    }
}
A and B are just two vectors. I expect to reduce O(N*N) to O(N). In addition, I am not sure whether sorting A and B would help or not. Thanks!
A:
If you have to compare N*N/2 arbitrary pairs of elements, you need N*N/2 operations, i.e. O(N²). Sorting is O(N*log(N)) and can be done separately on both vectors. Sorting 2 vectors is still O(N*log(N)) because constant factors do not play a role in Big-O notation.
If the vectors are already sorted, you could break out of the inner loop at the first element that does not satisfy the condition, as sketched below.
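To make that last point concrete, here is a minimal sketch of the early-break idea. It is purely illustrative and not from the original answer: it assumes A is already sorted in descending order, so that once one element fails the condition no later element can satisfy it, and the function name is made up. The worst case is still O(N*N).
#include <vector>
#include <cstddef>

// Counts pairs (i, j) with j > i and A[j] > B[i], assuming A is sorted in
// descending order so the inner loop can stop at the first failing element.
long long countWithEarlyBreak(const std::vector<int>& A, const std::vector<int>& B) {
    long long ctr = 0;
    const std::size_t N = A.size();
    for (std::size_t i = 0; i + 1 < N; ++i) {
        for (std::size_t j = i + 1; j < N; ++j) {
            if (A[j] > B[i]) {
                ++ctr;   // element satisfies the condition
            } else {
                break;   // A is descending: no later A[j] can exceed B[i]
            }
        }
    }
    return ctr;
}
Keep in mind that sorting rearranges the indices, so this shortcut only gives the same count as the original loops when the input is meant to be processed in that sorted order.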
| Reduction of time complexity of two nested for loop O(N*N) | I am trying to reduce time complexity of the following nested loop which currently has an O(N*N) time complexity:
for(i = 0; i < N-1; i++){
for(j = i+1; j < N; j++){
if((A[j] > B[i])){
ctr++; //counting elements satisfying the condition
}
}
}
A and B are just two vectors. I expect reducing O(N*N) to O(N). In addition, I doubt that if sorting A and B will help or not! Thanks!
| [
"If you are comparing N*N/2 arbitrary elements, you need N*N/2 operations, i.e. O(n²). Sorting is O(N*log(N)) and can be done separately on both vectors. Sorting 2 vectors is still O(N*log(N)) because constant factors do not play a role in Big-O notation.\nIf the vectors are already sorted, you could break the loop after the first element which does not satisfy the condition.\n"
] | [
0
] | [] | [] | [
"big_o",
"c++",
"performance",
"time_complexity"
] | stackoverflow_0074675647_big_o_c++_performance_time_complexity.txt |
Q:
Significance of "Fm"
When I run this query
SELECT TO_CHAR(0, 'Fm99.99') FROM DUAL;
I get 0. as output in Oracle 10g.
But when I run
SELECT TO_CHAR(0, '99.99') FROM DUAL;
this gives .00 as output.
Please explain the significance of FM and how these two queries behave differently.
A:
fm signifies that you don't want the leading characters.
From the docs:
Fill mode. Oracle uses trailing blank characters and leading zeroes to
fill format elements to a constant width. The width is equal to the
display width of the largest element for the relevant format model
The FM modifier suppresses the above padding in the return value of
the TO_CHAR function.
A:
The fm (fill mode) modifier removes the extra spaces in month and day names in the TO_CHAR function for dates.
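To illustrate the padding described above, here is a hypothetical example with a date format model (the date literal and format strings below are made up for demonstration and are not from the original answers):
-- Without FM the month name is blank-padded to a fixed width and the day keeps its leading zero;
-- with FM that padding is suppressed.
SELECT TO_CHAR(DATE '2023-03-05', 'Month DD, YYYY')   AS padded,
       TO_CHAR(DATE '2023-03-05', 'FMMonth DD, YYYY') AS fill_mode
FROM dual;
-- padded comes back roughly as 'March     05, 2023', while fill_mode comes back as 'March 5, 2023'.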
| Significance of "Fm" | When I run this query
SELECT TO_CHAR(0, 'Fm99.99') FROM DUAL;
I got 0. as output in oracle 10g.
But when I run
SELECT TO_CHAR(0, '99.99') FROM DUAL;
this gives .00 as output.
Please explain what is the significance of FM and how these two query behave differently
| [
"fm signifies that you dont want the leading characters.\nFrom the docs:\n\nFill mode. Oracle uses trailing blank characters and leading zeroes to\n fill format elements to a constant width. The width is equal to the\n display width of the largest element for the relevant format model\nThe FM modifier suppresses the above padding in the return value of\n the TO_CHAR function.\n\n",
"The fm (fill mode) operator removes extra spaces in months, days in the TO_CHAR function for the date.\n"
] | [
1,
0
] | [] | [] | [
"oracle",
"sql"
] | stackoverflow_0029625009_oracle_sql.txt |
Q:
What's the difference between Git and Bit Bucket
What are Git and Bitbucket? What's the difference between Git and Bitbucket? Why do we use this software?
I want the right answer to this question, with a proper explanation, from professional people.
A:
Git is a version control system that is used to track changes to files, such as source code files. It allows multiple developers to work on the same project simultaneously, and provides tools to merge the changes made by each developer into a single, cohesive codebase.
Bitbucket is a web-based hosting service for Git repositories. It provides features such as collaboration tools, code review, and issue tracking to help teams work together on projects.
The main difference between Git and Bitbucket is that Git is a version control system, while Bitbucket is a hosting service for Git repositories. Git is a tool that is used to manage and track changes to files, while Bitbucket provides a platform for teams to collaborate on code and manage Git repositories.
We use Git and Bitbucket to help teams work together on software development projects. The version control features of Git allow developers to collaborate on code and merge their changes, while the collaboration tools in Bitbucket make it easier for teams to communicate and work together effectively.
Overall, Git and Bitbucket are important tools for teams that are working on software development projects, as they help to facilitate collaboration and ensure that the codebase is well-organized and consistent.
| What's the difference between Git and Bit Bucket | What is Git and Bit Bucket? What's the difference between Git and Bit Bucket.Why do we use this software?
I wanna right answer of this question with proper explanation by the help of professional people.
| [
"Git is a version control system that is used to track changes to files, such as source code files. It allows multiple developers to work on the same project simultaneously, and provides tools to merge the changes made by each developer into a single, cohesive codebase.\nBitbucket is a web-based hosting service for Git repositories. It provides features such as collaboration tools, code review, and issue tracking to help teams work together on projects.\nThe main difference between Git and Bitbucket is that Git is a version control system, while Bitbucket is a hosting service for Git repositories. Git is a tool that is used to manage and track changes to files, while Bitbucket provides a platform for teams to collaborate on code and manage Git repositories.\nWe use Git and Bitbucket to help teams work together on software development projects. The version control features of Git allow developers to collaborate on code and merge their changes, while the collaboration tools in Bitbucket make it easier for teams to communicate and work together effectively.\nOverall, Git and Bitbucket are important tools for teams that are working on software development projects, as they help to facilitate collaboration and ensure that the codebase is well-organized and consistent.\n"
] | [
1
] | [] | [] | [
"bitbucket",
"git",
"github",
"github_actions"
] | stackoverflow_0074675916_bitbucket_git_github_github_actions.txt |
Q:
How to I get distinct combinations of one XRef column related to any value in the other XRef column
I need to select the count of unique value combinations of column B in an XRef table which is grouped by column A.
Consider the following schema and data, which represents a simple family structure. Each child has a father and mother:
TABLE Father
| FatherID | Name |
|----------|------|
| 1 | Alex |
| 2 | Bob |
TABLE Mother
| MotherID | Name |
|----------|---------|
| 1 | Alice |
| 2 | Barbara |
TABLE Child
| ChildID | FatherID | MotherID | Name |
|---------|----------|-------------|--------|
| 1 | 1 (Alex) | 1 (Alice) | Adam |
| 2 | 1 (Alex) | 1 (Alice) | Billy |
| 3 | 1 (Alex) | 2 (Barbara) | Celine |
| 4 | 2 (Bob) | 2 (Barbara) | Derek |
The distinct combinations of mothers for each father are:
Alex (Alice, Barbara)
Bob (Barbara)
In all there are two distinct combinations of mothers:
Alice, Barbara
Barbara
The query I want to write would return the count of those distinct combinations of mothers, regardless of which father they are associated with:
| UniqueMotherGroups |
|--------------------|
| 2 |
I was able to do this successfully using the STRING_AGG function, but it feels clunky. It also needs to operate over millions of rows and is quite slow at the moment. Is there a more idiomatic way to do this with set operations instead?
Here is my working example:
-- Drop pre-existing tables
DROP TABLE IF EXISTS dbo.Child;
DROP TABLE IF EXISTS dbo.Father;
DROP TABLE IF EXISTS dbo.Mother;
-- Create family tables.
CREATE TABLE dbo.Father
(
FatherID INT NOT NULL
, Name VARCHAR(50) NOT NULL
);
ALTER TABLE dbo.Father
ADD CONSTRAINT PK_Father
PRIMARY KEY CLUSTERED (FatherID);
ALTER TABLE dbo.Father SET (LOCK_ESCALATION = TABLE);
CREATE TABLE dbo.Mother
(
MotherID INT NOT NULL
, Name VARCHAR(50) NOT NULL
);
ALTER TABLE dbo.Mother
ADD CONSTRAINT PK_Mother
PRIMARY KEY CLUSTERED (MotherID);
ALTER TABLE dbo.Mother SET (LOCK_ESCALATION = TABLE);
CREATE TABLE dbo.Child
(
ChildID INT NOT NULL
, FatherID INT NOT NULL
, MotherID INT NOT NULL
, Name VARCHAR(50) NOT NULL
);
ALTER TABLE dbo.Child
ADD CONSTRAINT PK_Child
PRIMARY KEY CLUSTERED (ChildID);
CREATE NONCLUSTERED INDEX IX_Parents ON dbo.Child (FatherID, MotherID);
ALTER TABLE dbo.Child
ADD CONSTRAINT FK_Child_Father
FOREIGN KEY (FatherID)
REFERENCES dbo.Father (FatherID);
ALTER TABLE dbo.Child
ADD CONSTRAINT FK_Child_Mother
FOREIGN KEY (MotherID)
REFERENCES dbo.Mother (MotherID);
-- Insert two children with the same parents
INSERT INTO dbo.Father
(
FatherID
, Name
)
VALUES
(1, 'Alex')
, (2, 'Bob')
, (3, 'Charlie')
INSERT INTO dbo.Mother
(
MotherID
, Name
)
VALUES
(1, 'Alice')
, (2, 'Barbara');
INSERT INTO dbo.Child
(
ChildID
, FatherID
, MotherID
, Name
)
VALUES
(1, 1, 1, 'Adam')
, (2, 1, 1, 'Billy')
, (3, 1, 2, 'Celine')
, (4, 2, 2, 'Derek')
, (5, 3, 1, 'Eric');
-- CTE gets distinct combinations of parents
WITH distinctParentCombinations (FatherID, MotherID)
AS (SELECT children.FatherID
         , children.MotherID
    FROM dbo.Child as children
    GROUP BY children.FatherID
           , children.MotherID
   )
-- CTE uses STRING_AGG to get unique combinations of mothers.
, motherGroups (Mothers)
AS (SELECT STRING_AGG(CONVERT(VARCHAR(MAX), distinctParentCombinations.MotherID), '-') WITHIN GROUP (ORDER BY distinctParentCombinations.MotherID) AS Mothers
    FROM distinctParentCombinations
    GROUP BY distinctParentCombinations.FatherID
   )
-- Remove the COUNT function to see the actual combinations
SELECT COUNT(motherGroups.Mothers) AS UniqueMotherGroups
FROM motherGroups
-- Clean up the example
DROP TABLE IF EXISTS dbo.Child;
DROP TABLE IF EXISTS dbo.Father;
DROP TABLE IF EXISTS dbo.Mother;
A:
Thank you for posting such a comprehensive setup for the test data. However, I'm not running any CREATE/DROP statements against my DB so I converted those tables into table variables. Using your data, I came up with the following query. Just change the table names back to your dbo. names and you should be able to test in your environment. I basically concatenate every father/mother combo into a text string using FOR XML PATH. Then I count up all the distinct combos. If you find error in my logic, let me know. I'm just tossing this in the ring of possible solutions.
WITH distinctCombos AS (
SELECT DISTINCT
c.FatherID, c.MotherID
FROM @Child as c
) , motherComboCount AS (
SELECT
f.FatherID
, f.[Name]
, STUFF((
SELECT
',' + CAST(dc.MotherID as nvarchar)
FROM distinctCombos as dc
WHERE dc.FatherID = f.FatherID
ORDER BY dc.MotherID ASC
FOR XML PATH('')
),1,1,'') as motherList
FROM @Father as f
)
SELECT
COUNT(DISTINCT motherList) as UniqueMotherGroups
FROM motherComboCount as mcc
To save a bit of compute power, remove the STUFF function as it's not necessary for the comparison... it just makes the list nicer to look at if displaying... and I'm in the habit of using it.
It looks like the main differences between our methods are the use of FOR XML PATH vs STRING_AGG (I'm still on older SQL) and that I use DISTINCT twice instead of GROUP BY. If you have a larger dataset to test against, let me know how the 2 methods compare. I'm trying to think of a completely set-based method but I can't see it at the moment.
Update: Method 2.
Here's an idea I had using recursive CTEs to build the distinct mother combinations. In your example data, there are only 2 mothers per father. So there would be a total of 4 set-based queries performed (first CTE, 2 queries in the recursive CTE and the final SELECT).
WITH uniqueCombo as (
SELECT DISTINCT
c.FatherID
, c.MotherID
, ROW_NUMBER() OVER(PARTITION BY c.FatherID ORDER BY c.MotherID) as row_num
FROM @Child as c
), combos as (
SELECT
uc.FatherID
, uc.MotherID
, CAST(uc.MotherID as nvarchar(max)) as [path]
, row_num
, 0 as hierarchy_num
FROM uniqueCombo as uc
WHERE uc.row_num = 1
UNION ALL
SELECT
uc.FatherID
, uc.MotherID
, co.[path] + ',' + CAST(uc.MotherID as nvarchar(max))
, uc.row_num
, co.hierarchy_num + 1 as heirarchy_num
FROM uniqueCombo as uc
INNER JOIN combos as co
ON co.FatherID = uc.FatherID
--AND co.MotherID <> uc.MotherID
AND co.row_num + 1 = uc.row_num
), rankedCombos as (
SELECT
c.[path]
, ROW_NUMBER() OVER(PARTITION BY c.FatherID ORDER BY c.hierarchy_num DESC) as row_num
FROM combos as c
)
SELECT COUNT(DISTINCT rc.[path]) as UniqueMotherGroups
FROM rankedCombos as rc
WHERE rc.row_num = 1
Update 2:
I had another idea to use a PIVOT to transpose the records so that the FatherID would be in the left-most column with the MotherIDs as the column headers. To make that work with a dynamic list of MotherIDs, you have to use a dynamic PIVOT/dynamic SQL. (FatherID isn't really needed in the PIVOT so it's not included in the PIVOT query. I just had to describe what the goal is.) After the pivot, you can SELECT DISTINCT to get the unique mother combinations. Then the last SELECT is to get the COUNT. This one I ran in SQL Fiddle:
SQL Fiddle
MS SQL Server 2017 Schema Setup:
-- Create family tables.
CREATE TABLE dbo.Father
(
FatherID INT NOT NULL
, Name VARCHAR(50) NOT NULL
);
ALTER TABLE dbo.Father
ADD CONSTRAINT PK_Father
PRIMARY KEY CLUSTERED (FatherID);
ALTER TABLE dbo.Father SET (LOCK_ESCALATION = TABLE);
CREATE TABLE dbo.Mother
(
MotherID INT NOT NULL
, Name VARCHAR(50) NOT NULL
);
ALTER TABLE dbo.Mother
ADD CONSTRAINT PK_Mother
PRIMARY KEY CLUSTERED (MotherID);
ALTER TABLE dbo.Mother SET (LOCK_ESCALATION = TABLE);
CREATE TABLE dbo.Child
(
ChildID INT NOT NULL
, FatherID INT NOT NULL
, MotherID INT NOT NULL
, Name VARCHAR(50) NOT NULL
);
ALTER TABLE dbo.Child
ADD CONSTRAINT PK_Child
PRIMARY KEY CLUSTERED (ChildID);
CREATE NONCLUSTERED INDEX IX_Parents ON dbo.Child (FatherID, MotherID);
ALTER TABLE dbo.Child
ADD CONSTRAINT FK_Child_Father
FOREIGN KEY (FatherID)
REFERENCES dbo.Father (FatherID);
ALTER TABLE dbo.Child
ADD CONSTRAINT FK_Child_Mother
FOREIGN KEY (MotherID)
REFERENCES dbo.Mother (MotherID);
-- Insert two children with the same parents
INSERT INTO dbo.Father
(
FatherID
, Name
)
VALUES
(1, 'Alex')
, (2, 'Bob')
, (3, 'Charlie')
INSERT INTO dbo.Mother
(
MotherID
, Name
)
VALUES
(1, 'Alice')
, (2, 'Barbara');
INSERT INTO dbo.Child
(
ChildID
, FatherID
, MotherID
, Name
)
VALUES
(1, 1, 1, 'Adam')
, (2, 1, 1, 'Billy')
, (3, 1, 2, 'Celine')
, (4, 2, 2, 'Derek')
, (5, 3, 1, 'Eric');
Query 1:
DECLARE @cols AS nvarchar(MAX)
DECLARE @query AS nvarchar(MAX)
SET @cols = STUFF((
SELECT DISTINCT ',' + QUOTENAME(m.MotherID)
FROM Mother as m
FOR XML PATH(''))
,1,1,'')
SET @query = 'SELECT COUNT(mCount) as UniqueMotherGroups FROM (
SELECT DISTINCT ' + @cols + ', 1 as mCount FROM (
SELECT ' + @cols + '
FROM (
SELECT
c.FatherID
, c.MotherID
, 1 as mID
FROM child as c
) x
PIVOT
(
MAX(mID)
FOR MotherID in (' + @cols + ')
) p
) as m
) as mg'
--SELECT @query
Exec(@query)
Results:
| UniqueMotherGroups |
|--------------------|
| 3 |
UPDATE 3: Here's one other idea... create a results table with a unique constraint and with IGNORE_DUP_KEY=ON. You could use this in a function or stored procedure, or, setup a trigger to put the mother combinations into a unique-combo-holding-table. With IGNORE_DUP_KEY=ON, you can insert every combo and only the unique combos will remain. Then just do a count of all the rows.
--Create a table to hold the results:
CREATE TABLE results (
ChildID int not null
, UniqueCombos nvarchar(50) not null
PRIMARY KEY WITH (IGNORE_DUP_KEY = ON)
);
--Insert all combos into the results table. The unique constraint will cause only unique entries to remain.
INSERT INTO results (ChildID, UniqueCombos)
SELECT DISTINCT
c.ChildID
, (
SELECT ',' + CAST(MotherID as nvarchar(500))
FROM Child as c2
WHERE c2.ChildID = c.ChildID
ORDER BY c2.MotherID
FOR XML PATH('')
) as mother_combos
FROM Child as c
;
--Count up all the rows in the results table. Since these are all unique combinations, it should be fast to sum.
SELECT COUNT(*)
FROM results;
A:
If you accept defining a maximum number of mothers per father (here 7), you may try:
select count(*) as UniqueMotherGroups from (
select distinct m1, m2, m3, m4, m5, m6, m7 from (
select FatherID, row_number() over(partition by FatherID order by motherid) as rn, motherid
from (
select distinct FatherID, MotherID
from t_Child
)
)
pivot (
max(motherid) for rn in (1 as m1,2 as m2,3 as m3,4 as m4,5 as m5,6 as m6,7 as m7)
)
)
;
UNIQUEMOTHERGROUPS
------------------
3
A:
Here is one idea. Instead of using precise STRING_AGG you can calculate a hash / checksum of the group. You don't need to know the exact composition of the group, you just need to distinguish between different groups. Calculating of the hash may be faster than concatenating strings.
SQL Server has a function CHECKSUM_AGG
You can write your own hashing function with CLR.
Sample data
CREATE TABLE #Child
(
ChildID INT NOT NULL IDENTITY PRIMARY KEY
,FatherID INT NOT NULL
,MotherID INT NOT NULL
,Name VARCHAR(50) NOT NULL
);
INSERT INTO #Child
(
FatherID
,MotherID
,Name
)
VALUES
(1, 1, 'Adam')
,(1, 1, 'Billy')
,(1, 2, 'Celine')
,(2, 2, 'Derek')
,(3, 1, 'Eric')
,(4, 1, 'A')
,(4, 1, 'B')
,(4, 2, 'C')
,(4, 2, 'D')
,(4, 2, 'E')
,(5, 2, 'F')
,(6, 2, 'G')
;
Query
WITH
distinctParentCombinations
AS
(
SELECT
FatherID
,MotherID
FROM #Child
GROUP BY
FatherID
,MotherID
)
,motherGroups
AS
(
SELECT
FatherID
,CHECKSUM_AGG(MotherID) AS MotherGroup
FROM distinctParentCombinations
GROUP BY
FatherID
)
SELECT COUNT(DISTINCT MotherGroup) AS UniqueMotherGroups
FROM motherGroups
;
Result
+--------------------+
| UniqueMotherGroups |
+--------------------+
| 3 |
+--------------------+
You need to compare performance of all methods on your actual data.
Obviously, with CHECKSUM_AGG it is possible that some of the groups will be missed. There is a chance that two different groups will generate the same checksum.
You know better if this is acceptable.
A:
You have a great explanation and setup of your "problem case".
Your setup runs great in (for example) tempdb.
You have solved the problem in a nice way, and I don't think you can optimize it much further if you are going to calculate the mother groups every time you run the query.
There is one small mistake though: you must do a COUNT(DISTINCT motherGroups.Mothers) in your final count.
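Concretely, only the final SELECT of the original query needs to change:
-- Count the distinct mother combinations (note the added DISTINCT)
SELECT COUNT(DISTINCT motherGroups.Mothers) AS UniqueMotherGroups
FROM motherGroups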
Since you mention millions of rows, I would suggest a slightly different approach.
If you aggregate the mother groups as soon as there is a change in the Child table, your query can run fast every time - even with millions of rows.
The kind of queries you want to run is seldom run only once, so it would be nice if the heavy work is already done.
Usually I prefer not to use triggers, because you get extra logic in a place where it could be hard to find and debug.
But sometimes triggers are nice to have, especially when you are not able to change the source code running on the clients.
So, my solution is to add a new column to the Father table and to create a trigger which (re)generates the mother group each time there is a change in the Child table.
This way, the hard aggregation work for each father is done as soon as there is a change, and you don't have to aggregate when you run your query.
Since you already have millions of rows, we also have to update these existing rows.
I have used SQL Server 2019 for this solution.
*** The solution ***
Add 1 or 2 new columns to the Father table.
Whether you should add 1 or 2 depends on your preferences:
"Do I want to see the aggregated mother groups for debugging purpose, or do I just trust the hashed values?"
Column 1: Hashed value of the aggregated mother group for each Father row.
The hashed value is VARBINARY and is at least 32 bytes, but we will use VARBINARY(1600):
1600 is less than 1700 which is the max nonclustered index size, so we will not have any problems indexing the column.
Since the hash value is in blocks of 32 bytes, a value of 1600 will cover a really, really, really long aggregated mother group.
-- Column 1: Hashed value of the aggregated mother group for each Father row.
alter table Father add MotherHash varbinary(1600)
create index IX_MotherHash on Father(MotherHash)
Column 2: This column is more optional, and depends on your preferences.
The column could be nice to have for debugging purposes if any questions arise about the result.
Which VARCHAR-length you should use depends on your real data.
MAX? Then you have no problems storing the mother groups, but you might have problems indexing it, since 1700 is the max for an unclustered index. But maybe you don't need to index it?
1700? Then you are able to index the column, but depending on your real data, will this cover the biggest mother group?
Why indexing? If you want to list the aggregated mother groups, it could be faster to read the index than the whole table.
As said, this depends on you (and your data). If we have no need to see the aggregated mother groups, then we don't need this column at all.
For this demo/solution we will add the column for debugging purpose, without any indexing.
-- Column 2: This column is more optional, and depends on your preferences.
alter table Father add MotherGroup varchar(MAX)
go
Create a trigger on the Child table.
It will handle all inserts, updates and deletes in the Child table.
create or alter trigger trIUD_Child on Child
after insert, update, delete
as
begin
set nocount on
-- Get all FatherIDs from the Inserted and Deleted table.
-- An ordinary Temp table is created with a clustered index to get SEEK performance later.
-- The table might also have more than 100 rows, where table variables are not recommended.
declare @numRowsInInsertedDeleted int
create table #rowsInInsertedDeleted(rowId int identity(1, 1), FatherID int)
create unique clustered index ix on #rowsInInsertedDeleted(rowId)
insert #rowsInInsertedDeleted(FatherID)
select distinct f.FatherID
from
(
select i.FatherID from inserted i
union all
select i.FatherID from deleted i
) f
select @numRowsInInsertedDeleted = max(rowId) from #rowsInInsertedDeleted
-- We have to loop each of the FatherIDs, since we might have several rows in the Inserted and Deleted tables.
declare @rowId int = 0
while (@rowId < @numRowsInInsertedDeleted)
begin
-- Get the father for the next row.
select @rowId += 1
declare @fatherId int
select @fatherId = r.FatherID
from #rowsInInsertedDeleted r
where r.rowId = @rowId
-- Aggregate the mothers for this father.
declare @motherGroup varchar(max) = ''
select @motherGroup += ',' + cast(c.MotherID as varchar)
from Child c
where c.FatherID = @fatherId
group by c.MotherID
order by c.MotherID
-- Update the father record.
-- Any empty strings are handled automatically, skip the leading ','.
update Father
set MotherGroup = substring(@motherGroup, 2, 2147483647),
MotherHash = HASHBYTES('SHA2_256', @motherGroup)
where FatherID = @fatherId
end
end
go
Updating existing rows
Since you already have millions of rows, we must aggregate the mother groups for these existing rows.
If you don't have the disk space for logging the update of the whole table, maybe you should take your database out of AG and switch to Simple recovery model for this task?
In that case you should also modify the update with a WHERE clause to update only parts of the table, and run the update for each part until the whole table is updated.
Example: update Child set FatherID = FatherID where FatherID between 1 and 1000000
Note: This update statement could block access to the Child table for other users.
-- Aggregate the mother groups for the existing rows.
-- This could takes minutes to complete, depending on the number of rows.
-- NOTE: This update statement could block access to the Child table for other users.
update Child set FatherID = FatherID
That's it!
You should now be able to quickly get the mother groups on existing rows, and also after future changes in the Child table.
-- Voila - now you can get the unique mother groups any time at a fast speed.
select count(distinct MotherHash) from Father
| How to I get distinct combinations of one XRef column related to any value in the other XRef column | I need to select the count of unique value combinations of column B in an XRef table which is grouped by column A.
Consider the following schema and data, which represents a simple family structure. Each child has a father and mother:
TABLE Father
FatherID
Name
1
Alex
2
Bob
TABLE Mother
MotherID
Name
1
Alice
2
Barbara
TABLE Child
ChildID
FatherID
MotherID
Name
1
1 (Alex)
1 (Alice)
Adam
2
1 (Alex)
1 (Alice)
Billy
3
1 (Alex)
2 (Barbara)
Celine
4
2 (Bob)
2 (Barbara)
Derek
The distinct combinations of mothers for each father are:
Alex (Alice, Barbara)
Bob (Barbara)
In all there are two distinct combinations of mothers:
Alice, Barbara
Barbara
The query I want to write would return the count of those distinct combinations of mother, regardless of which father they are associated with:
UniqueMotherGroups
2
I was able to do this successfully using the STRING_AGG function, but it feels clunky. It also needs to operate over millions of rows and is quite slow at the moment. Is there a more idiomatic way to do this with set operations instead?
Here is my working example:
-- Drop pre-existing tables
DROP TABLE IF EXISTS dbo.Child;
DROP TABLE IF EXISTS dbo.Father;
DROP TABLE IF EXISTS dbo.Mother;
-- Create family tables.
CREATE TABLE dbo.Father
(
FatherID INT NOT NULL
, Name VARCHAR(50) NOT NULL
);
ALTER TABLE dbo.Father
ADD CONSTRAINT PK_Father
PRIMARY KEY CLUSTERED (FatherID);
ALTER TABLE dbo.Father SET (LOCK_ESCALATION = TABLE);
CREATE TABLE dbo.Mother
(
MotherID INT NOT NULL
, Name VARCHAR(50) NOT NULL
);
ALTER TABLE dbo.Mother
ADD CONSTRAINT PK_Mother
PRIMARY KEY CLUSTERED (MotherID);
ALTER TABLE dbo.Mother SET (LOCK_ESCALATION = TABLE);
CREATE TABLE dbo.Child
(
ChildID INT NOT NULL
, FatherID INT NOT NULL
, MotherID INT NOT NULL
, Name VARCHAR(50) NOT NULL
);
ALTER TABLE dbo.Child
ADD CONSTRAINT PK_Child
PRIMARY KEY CLUSTERED (ChildID);
CREATE NONCLUSTERED INDEX IX_Parents ON dbo.Child (FatherID, MotherID);
ALTER TABLE dbo.Child
ADD CONSTRAINT FK_Child_Father
FOREIGN KEY (FatherID)
REFERENCES dbo.Father (FatherID);
ALTER TABLE dbo.Child
ADD CONSTRAINT FK_Child_Mother
FOREIGN KEY (MotherID)
REFERENCES dbo.Mother (MotherID);
-- Insert two children with the same parents
INSERT INTO dbo.Father
(
FatherID
, Name
)
VALUES
(1, 'Alex')
, (2, 'Bob')
, (3, 'Charlie')
INSERT INTO dbo.Mother
(
MotherID
, Name
)
VALUES
(1, 'Alice')
, (2, 'Barbara');
INSERT INTO dbo.Child
(
ChildID
, FatherID
, MotherID
, Name
)
VALUES
(1, 1, 1, 'Adam')
, (2, 1, 1, 'Billy')
, (3, 1, 2, 'Celine')
, (4, 2, 2, 'Derek')
, (5, 3, 1, 'Eric');
-- CTE Gets distinct combinations of parents
WITH distinctParentCombinations (FatherID, MotherID)
AS (SELECT children.FatherID
, children.MotherID
FROM dbo.Child as children
GROUP BY children.FatherID
, children.MotherID
)
-- CTE Gets uses STRING_AGG to get unique combinations of mothers.
, motherGroups (Mothers)
AS (SELECT STRING_AGG(CONVERT(VARCHAR(MAX), distinctParentCombinations.MotherID), '-') WITHIN GROUP (ORDER BY distinctParentCombinations.MotherID) AS Mothers
FROM distinctParentCombinations
GROUP BY distinctParentCombinations.FatherID
)
-- Remove the COUNT function to see the actual combinations
SELECT COUNT(motherGroups.Mothers) AS UniqueMotherGroups
FROM motherGroups
-- Clean up the example
DROP TABLE IF EXISTS dbo.Child;
DROP TABLE IF EXISTS dbo.Father;
DROP TABLE IF EXISTS dbo.Mother;
| [
"Thank you for posting such a comprehensive setup for the test data. However, I'm not running any CREATE/DROP statements against my DB so I converted those tables into table variables. Using your data, I came up with the following query. Just change the table names back to your dbo. names and you should be able to test in your environment. I basically concatenate every father/mother combo into a text string using FOR XML PATH. Then I count up all the distinct combos. If you find error in my logic, let me know. I'm just tossing this in the ring of possible solutions.\nWITH distinctCombos AS (\n SELECT DISTINCT\n c.FatherID, c.MotherID\n FROM @Child as c\n) , motherComboCount AS (\n SELECT\n f.FatherID\n , f.[Name]\n , STUFF((\n SELECT\n ',' + CAST(dc.MotherID as nvarchar)\n FROM distinctCombos as dc\n WHERE dc.FatherID = f.FatherID\n ORDER BY dc.MotherID ASC\n FOR XML PATH('')\n ),1,1,'') as motherList\n FROM @Father as f\n)\nSELECT\n COUNT(DISTINCT motherList) as UniqueMotherGroups\nFROM motherComboCount as mcc\n\nTo save a bit of compute power, remove the STUFF function as it's not necessary for the comparison... it just makes the list nicer to look at if displaying... and I'm in the habit of using it.\nIt looks like the main differences between our methods is the use of FOR XML PATH vs STRING_AGG (I'm still on older SQL.) And I use DISTINCT twice instead of GROUP BY. If you have a larger dataset to test against, let me know how the 2 methods compare. I'm trying to think of a completely set-based method but I can't see it at the moment.\nUpdate: Method 2.\nHere's an idea I had using recursive CTEs to build the distinct mother combinations. In your example data, there are only 2 mothers per father. So there would be a total of 4 set-based queries performed (first CTE, 2 queries in the recursive CTE and the final SELECT).\nWITH uniqueCombo as (\n SELECT DISTINCT\n c.FatherID\n , c.MotherID\n , ROW_NUMBER() OVER(PARTITION BY c.FatherID ORDER BY c.MotherID) as row_num\n FROM @Child as c\n), combos as (\n SELECT\n uc.FatherID\n , uc.MotherID\n , CAST(uc.MotherID as nvarchar(max)) as [path]\n , row_num\n , 0 as hierarchy_num\n FROM uniqueCombo as uc\n WHERE uc.row_num = 1\n\n UNION ALL\n\n SELECT\n uc.FatherID\n , uc.MotherID\n , co.[path] + ',' + CAST(uc.MotherID as nvarchar(max))\n , uc.row_num\n , co.hierarchy_num + 1 as heirarchy_num\n FROM uniqueCombo as uc\n INNER JOIN combos as co\n ON co.FatherID = uc.FatherID\n --AND co.MotherID <> uc.MotherID\n AND co.row_num + 1 = uc.row_num\n), rankedCombos as (\n SELECT \n c.[path]\n , ROW_NUMBER() OVER(PARTITION BY c.FatherID ORDER BY c.hierarchy_num DESC) as row_num\n FROM combos as c\n)\nSELECT COUNT(DISTINCT rc.[path]) as UniqueMotherGroups\nFROM rankedCombos as rc\nWHERE rc.row_num = 1\n\nUpdate 2:\nI had another idea to use a PIVOT to transpose the records so that the FatherID would be in the left-most column with the MotherIDs as the column headers. To make that work with a dynamic list of MotherIDs, you have to use a dynamic PIVOT/dynamic SQL. (FatherID isn't really needed in the PIVOT so it's not included in the PIVOT query. I just had to describe what the goal is.) After the pivot, you can SELECT DISTINCT to get the unique mother combinations. Then the last SELECT is to get the COUNT. 
This one I ran in SQL Fiddle:\nSQL Fiddle\nMS SQL Server 2017 Schema Setup:\n-- Create family tables.\n\nCREATE TABLE dbo.Father\n(\n FatherID INT NOT NULL\n , Name VARCHAR(50) NOT NULL\n);\n\nALTER TABLE dbo.Father\nADD CONSTRAINT PK_Father\n PRIMARY KEY CLUSTERED (FatherID);\n\nALTER TABLE dbo.Father SET (LOCK_ESCALATION = TABLE);\n\nCREATE TABLE dbo.Mother\n(\n MotherID INT NOT NULL\n , Name VARCHAR(50) NOT NULL\n);\n\nALTER TABLE dbo.Mother\nADD CONSTRAINT PK_Mother\n PRIMARY KEY CLUSTERED (MotherID);\n\nALTER TABLE dbo.Mother SET (LOCK_ESCALATION = TABLE);\n\nCREATE TABLE dbo.Child\n(\n ChildID INT NOT NULL\n , FatherID INT NOT NULL\n , MotherID INT NOT NULL\n , Name VARCHAR(50) NOT NULL\n);\n\nALTER TABLE dbo.Child\nADD CONSTRAINT PK_Child\n PRIMARY KEY CLUSTERED (ChildID);\n\nCREATE NONCLUSTERED INDEX IX_Parents ON dbo.Child (FatherID, MotherID);\n\nALTER TABLE dbo.Child\nADD CONSTRAINT FK_Child_Father\n FOREIGN KEY (FatherID)\n REFERENCES dbo.Father (FatherID);\n\nALTER TABLE dbo.Child\nADD CONSTRAINT FK_Child_Mother\n FOREIGN KEY (MotherID)\n REFERENCES dbo.Mother (MotherID);\n\n-- Insert two children with the same parents\n\nINSERT INTO dbo.Father\n(\n FatherID\n , Name\n)\nVALUES\n(1, 'Alex')\n, (2, 'Bob')\n, (3, 'Charlie')\n\nINSERT INTO dbo.Mother\n(\n MotherID\n , Name\n)\nVALUES\n(1, 'Alice')\n, (2, 'Barbara');\n\nINSERT INTO dbo.Child\n(\n ChildID\n , FatherID\n , MotherID\n , Name\n)\nVALUES\n(1, 1, 1, 'Adam')\n, (2, 1, 1, 'Billy')\n, (3, 1, 2, 'Celine')\n, (4, 2, 2, 'Derek')\n, (5, 3, 1, 'Eric');\n\nQuery 1:\nDECLARE @cols AS nvarchar(MAX)\nDECLARE @query AS nvarchar(MAX)\n\nSET @cols = STUFF((\n SELECT DISTINCT ',' + QUOTENAME(m.MotherID) \n FROM Mother as m\n FOR XML PATH('')) \n,1,1,'')\n \nSET @query = 'SELECT COUNT(mCount) as UniqueMotherGroups FROM (\n SELECT DISTINCT ' + @cols + ', 1 as mCount FROM (\n SELECT ' + @cols + ' \n FROM (\n SELECT\n c.FatherID\n , c.MotherID\n , 1 as mID\n FROM child as c\n ) x\n PIVOT \n (\n MAX(mID)\n FOR MotherID in (' + @cols + ')\n ) p\n ) as m\n) as mg'\n\n--SELECT @query\nExec(@query)\n\nResults:\n| UniqueMotherGroups |\n|--------------------|\n| 3 |\n\nUPDATE 3: Here's one other idea... create a results table with a unique constraint and with IGNORE_DUP_KEY=ON. You could use this in a function or stored procedure, or, setup a trigger to put the mother combinations into a unique-combo-holding-table. With IGNORE_DUP_KEY=ON, you can insert every combo and only the unique combos will remain. Then just do a count of all the rows.\n--Create a table to hold the results:\nCREATE TABLE results (\n ChildID int not null\n , UniqueCombos nvarchar(50) not null\n PRIMARY KEY WITH (IGNORE_DUP_KEY = ON)\n);\n\n--Insert all combos into the results table. The unique constraint will cause only unique entries to remain.\nINSERT INTO results (ChildID, UniqueCombos)\nSELECT DISTINCT\n c.ChildID\n , (\n SELECT ',' + CAST(MotherID as nvarchar(500))\n FROM Child as c2\n WHERE c2.ChildID = c.ChildID\n ORDER BY c2.MotherID\n FOR XML PATH('')\n ) as mother_combos\nFROM Child as c\n;\n\n--Count up all the rows in the results table. Since these are all unique combinations, it should be fast to sum.\nSELECT COUNT(*)\nFROM results;\n\n",
"If you accept to define a maximum number of mothers per father (here 7) you may try:\nselect count(*) as UniqueMotherGroups from (\nselect distinct m1, m2, m3, m4, m5, m6, m7 from (\n select FatherID, row_number() over(partition by FatherID order by motherid) as rn, motherid\n from (\n select distinct FatherID, MotherID\n from t_Child \n )\n)\npivot (\n max(motherid) for rn in (1 as m1,2 as m2,3 as m3,4 as m4,5 as m5,6 as m6,7 as m7)\n)\n)\n;\n\n\nUNIQUEMOTHERGROUPS\n------------------\n 3\n\n",
"Here is one idea. Instead of using precise STRING_AGG you can calculate a hash / checksum of the group. You don't need to know the exact composition of the group, you just need to distinguish between different groups. Calculating of the hash may be faster than concatenating strings.\nSQL Server has a function CHECKSUM_AGG\nYou can write your own hashing function with CLR.\nSample data\nCREATE TABLE #Child\n(\n ChildID INT NOT NULL IDENTITY PRIMARY KEY\n ,FatherID INT NOT NULL\n ,MotherID INT NOT NULL\n ,Name VARCHAR(50) NOT NULL\n);\n\nINSERT INTO #Child\n(\nFatherID\n,MotherID\n,Name\n)\nVALUES\n (1, 1, 'Adam')\n,(1, 1, 'Billy')\n,(1, 2, 'Celine')\n,(2, 2, 'Derek')\n,(3, 1, 'Eric')\n\n,(4, 1, 'A')\n,(4, 1, 'B')\n,(4, 2, 'C')\n,(4, 2, 'D')\n,(4, 2, 'E')\n\n,(5, 2, 'F')\n,(6, 2, 'G')\n;\n\nQuery\nWITH\ndistinctParentCombinations\nAS\n(\n SELECT\n FatherID\n ,MotherID\n FROM #Child\n GROUP BY\n FatherID\n ,MotherID\n)\n,motherGroups\nAS\n(\n SELECT\n FatherID\n ,CHECKSUM_AGG(MotherID) AS MotherGroup\n FROM distinctParentCombinations\n GROUP BY\n FatherID\n)\nSELECT COUNT(DISTINCT MotherGroup) AS UniqueMotherGroups\nFROM motherGroups\n;\n\nResult\n+--------------------+\n| UniqueMotherGroups |\n+--------------------+\n| 3 |\n+--------------------+\n\nYou need to compare performance of all methods on your actual data.\nObviously, with CHECKSUM_AGG it is possible that some of the groups will be missed. There is a chance that two different groups will generate the same checksum.\nYou know better if this is acceptable.\n",
"You have a great explanation and setup of your \"problem case\".\nYour setup runs great in (for example) tempdb.\nYou have solved the problem in a nice way, and I don't think you can optimize it much further if you are going to calculate the mother groups every time you run the query.\nThere is one small mistake though; You must do a COUNT(DISTINCT motherGroups.Mothers) in your final count.\nSince you mention milions of rows, I would suggest a slightly different approach.\nIf you aggregate the mother groups as soon as there is a change in the Child table, your query can run fast every time - even with millions of rows.\nThe kind of queries you want to run is seldom run only once, so it would be nice if the heavy work is already done.\nUsually I prefer not to use triggers, because you get extra logic in a place where it could be hard to find and debug.\nBut sometimes triggers are nice to have, especially when you are not able to change the source code running on the clients.\nSo, my solution is to add a new column to the Father table and to create a trigger which (re)generates the mother group each time there is a change in the Child table.\nThis way, the hard aggregation work for each father is done as soon there is a change, and you don't have to aggregate when you run your query.\nSince you already have millions of rows, we also have to update these existing rows.\nI have used SQL Server 2019 for this solution.\n*** The solution ***\nAdd 1 or 2 new columns to the Father table.\nIf you should add 1 or 2, it depends on what your preferences are:\n\"Do I want to see the aggregated mother groups for debugging purpose, or do I just trust the hashed values?\"\nColumn 1: Hashed value of the aggregated mother group for each Father row.\nThe hashed value is VARBINARY and is at least 32 bytes, but we will use VARBINARY(1600):\n\n1600 is less than 1700 which is the max nonclustered index size, so we will not have any problems indexing the column.\nSince the hash value is in blocks of 32 bytes, a value of 1600 will cover a really, really, really long aggreated mother group.\n\n-- Column 1: Hashed value of the aggregated mother group for each Father row.\nalter table Father add MotherHash varbinary(1600)\ncreate index IX_MotherHash on Father(MotherHash) \n\nColumn 2: This column is more optional, and depends on your preferences.\nThe column could be nice to have for debugging purpose if any questions are made about the result.\nWhich VARCHAR-length you should use depends on your real data.\n\nMAX? Then you have no problems storing the mother groups, but you might have problems indexing it, since 1700 is the max for an unclustered index. But maybe you don't need to index it?\n1700? Then you are able to index the column, but depending on your real data, will this cover the biggest mother group?\n\nWhy indexing? If you want to list the aggregated mother groups, it could be faster to read the index than the whole table.\nAs said; this depends on you (and your data). 
If we have no need to see the aggregated mother groups, then we don't need this column at all.\nFor this demo/solution we will add the column for debugging purpose, without any indexing.\n-- Column 2: This column is more optional, and depends on your preferences.\nalter table Father add MotherGroup varchar(MAX)\ngo\n\nCreate a trigger on the Child table.\nIt will handle all inserts, updates and deletes in the Child table.\ncreate or alter trigger trIUD_Child on Child\nafter insert, update, delete\nas\nbegin\n set nocount on\n -- Get all FatherIDs from the Inserted and Deleted table.\n -- An ordinary Temp table is created with a clustered index to get SEEK performance later.\n -- The table might also have more than 100 rows, where table variables are not recommended.\n declare @numRowsInInsertedDeleted int\n create table #rowsInInsertedDeleted(rowId int identity(1, 1), FatherID int)\n create unique clustered index ix on #rowsInInsertedDeleted(rowId)\n insert #rowsInInsertedDeleted(FatherID)\n select distinct f.FatherID\n from\n (\n select i.FatherID from inserted i\n union all\n select i.FatherID from deleted i\n ) f\n select @numRowsInInsertedDeleted = max(rowId) from #rowsInInsertedDeleted\n\n -- We have to loop each of the FatherIDs, since we might have several rows in the Inserted and Deleted tables.\n declare @rowId int = 0\n while (@rowId < @numRowsInInsertedDeleted)\n begin\n -- Get the father for the next row.\n select @rowId += 1\n declare @fatherId int\n select @fatherId = r.FatherID\n from #rowsInInsertedDeleted r\n where r.rowId = @rowId\n \n -- Aggregate the mothers for this father.\n declare @motherGroup varchar(max) = ''\n select @motherGroup += ',' + cast(c.MotherID as varchar)\n from Child c\n where c.FatherID = @fatherId\n group by c.MotherID \n order by c.MotherID\n\n -- Update the father record.\n -- Any empty strings are handled automatically, skip the leading ','.\n update Father\n set MotherGroup = substring(@motherGroup, 2, 2147483647),\n MotherHash = HASHBYTES('SHA2_256', @motherGroup)\n where FatherID = @fatherId\n end\nend\ngo\n\nUpdating existing rows\nSince you already have millions of rows, we must aggregate the mother groups for these existing rows.\nIf you don't have the disk space for logging the update of the whole table, maybe you should take your database out of AG and switch to Simple recovery model for this task? \nIn that case you should also modify the update with a WHERE clause to update only parts of the table, and run the update for each part until the whole table is updated.\nExample: update Child set FatherID = FatherID where FatherID between 1 and 1000000\nNote: This update statement could block access to the Child table for other users.\n-- Aggregate the mother groups for the existing rows.\n-- This could takes minutes to complete, depending on the number of rows.\n-- NOTE: This update statement could block access to the Child table for other users.\nupdate Child set FatherID = FatherID\n\nThat's it!\nYou should now be able to quickly get the mother groups on existing rows, and also after future changes in the Child table.\n-- Voila - now you can get the unique mother groups any time at a fast speed.\nselect count(distinct MotherHash) from Father\n\n"
] | [
1,
0,
0,
0
] | [] | [] | [
"sql",
"sql_server",
"tsql"
] | stackoverflow_0074480257_sql_sql_server_tsql.txt |
Q:
Wrap interpretation of statistical test in function
I want to create a function in R that interprets the result of the Goldfeld-Quandt test.
library(lmtest)

interp <- function(model, order, data, fraction){
    test_result = gqtest(model, order.by, data, fraction)
    # *some part of function here that gets the result and gets the interpretation*
}
Basically, it's just an automation of the interpretation of the Goldfeld-Quandt test.
If I do it manually, I can always interpret it. But I have to create a function, and I can't think of how to write such a function.
A sample result of a Goldfeld-Quandt test is this:
Goldfeld-Quandt test
data: data
GQ = 0.3843, df1 = 41, df2 = 40, p-value = 0.8921
alternative hypothesis: variance increases from segment 1 to 2
I want to scan through this result; my target is the p-value. How do I do that? Can I assign the result to a variable, say test_result, convert it to a string, and then scan through it?
A:
How about this:
library(lmtest)
x <- rep(c(-1,1), 50)
## generate heteroskedastic and homoskedastic disturbances
err1 <- c(rnorm(50, sd=1), rnorm(50, sd=2))
err2 <- rnorm(100)
## generate a linear relationship
y1 <- 1 + x + err1
y2 <- 1 + x + err2
## perform Goldfeld-Quandt test
mod <- lm(y1 ~ x)
gqfun <- function(model, alpha=.05, ...){
  test_result <- gqtest(model, ...)
  print(test_result)
  cat("Interpretation:\n")
  if(test_result$p.value >= alpha){
    cat("text if no significant result.\n")
  }
  if(test_result$p.value < alpha & test_result$alternative == "variance increases from segment 1 to 2"){
    cat("text for significant result with greater than alternative hypothesis.\n")
  }
  if(test_result$p.value < alpha & test_result$alternative == "variance changes from segment 1 to 2"){
    cat("text for significant result with two-sided alternative hypothesis.\n")
  }
  if(test_result$p.value < alpha & test_result$alternative == "variance decreases from segment 1 to 2"){
    cat("text for significant result with less than alternative hypothesis.\n")
  }
}
gqfun(mod)
#>
#> Goldfeld-Quandt test
#>
#> data: model
#> GQ = 4.8726, df1 = 48, df2 = 48, p-value = 9.552e-08
#> alternative hypothesis: variance increases from segment 1 to 2
#>
#> Interpretation:
#> text for significant result with greater than alternative hypothesis.
Created on 2022-12-04 by the reprex package (v2.0.1)
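As a side note on the original question about converting the result to a string: gqtest() returns an object of class "htest", so the p-value and the alternative hypothesis can be read directly from the returned list instead of being parsed out of printed text. A small sketch, reusing the mod object fitted above:
test_result <- gqtest(mod)
test_result$p.value      # the numeric p-value, no string parsing needed
test_result$alternative  # the alternative hypothesis as a character string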
| Wrap interpretation of statistical test in function | I want to create a function in R that interprets the result of Goldfeld-Quandt test.
library(lmtest)
interp <- function(model, order, data, fraction){
test_result = gqtest(model, order.by, data, fraction)
*some part of function here that gets the result and gets the interpretation*
}
Basically, it's just an automation of the interpretation of the Goldfeld-Quandt test.
If I do it manually, I can always interpret it. But I have to create a function and I can't think of anything to do such a function.
a sample result of a Goldfeld-Quandt test is this
Goldfeld-Quandt test
data: data
GQ = 0.3843, df1 = 41, df2 = 40, p-value = 0.8921
alternative hypothesis: variance increases from segment 1 to 2
I want to scan through this result, my target is the p-value. How do I do that? can I set the result to a variable say test_result and convert it to string? Then scan through it?
| [
"How about this:\nlibrary(lmtest)\n\nx <- rep(c(-1,1), 50)\n## generate heteroskedastic and homoskedastic disturbances\nerr1 <- c(rnorm(50, sd=1), rnorm(50, sd=2))\nerr2 <- rnorm(100)\n## generate a linear relationship\ny1 <- 1 + x + err1\ny2 <- 1 + x + err2\n## perform Goldfeld-Quandt test\n\nmod <- lm(y1 ~ x)\n\n\ngqfun <- function(model, alpha=.05, ...){\n test_result <- gqtest(model, ...)\n print(test_result)\n cat(\"Interpretation:\\n\")\n if(test_result$p.value >= alpha){\n cat(\"text if no significant result.\\n\")\n }\n if(test_result$p.value < alpha & test_result$alternative == \"variance increases from segment 1 to 2\"){\n cat(\"text for significant result with greater than alternative hypohtesis.\\n\")\n }\n if(test_result$p.value < alpha & test_result$alternative == \"variance changes from segment 1 to 2\"){\n cat(\"text for significant result with two-sided alternative hypohtesis.\\n\")\n }\n if(test_result$p.value < alpha & test_result$alternative == \"variance decreases from segment 1 to 2\"){\n cat(\"text for significant result with less than alternative hypohtesis.\\n\")\n }\n}\ngqfun(mod)\n#> \n#> Goldfeld-Quandt test\n#> \n#> data: model\n#> GQ = 4.8726, df1 = 48, df2 = 48, p-value = 9.552e-08\n#> alternative hypothesis: variance increases from segment 1 to 2\n#> \n#> Interpretation:\n#> text for significant result with greater than alternative hypohtesis.\n\nCreated on 2022-12-04 by the reprex package (v2.0.1)\n"
] | [
0
] | [] | [] | [
"function",
"r"
] | stackoverflow_0074675455_function_r.txt |
Q:
Cannot import name 'win32api' from 'PyInstaller.compat'
I am trying to run pyinstaller in msys2 in Windows7. However, I am getting following error:
ImportError: cannot import name 'win32api' from 'PyInstaller.compat' (/usr/lib/python3.10/site-packages/PyInstaller/compat.py)
I checked on the internet and found a possible solution: pip install pypiwin32. However, it is giving the following error:
$ pip install pypiwin32
Collecting pypiwin32
Using cached pypiwin32-223-py3-none-any.whl (1.7 kB)
Using cached pypiwin32-219.zip (4.8 MB)
Preparing metadata (setup.py) ... error
error: subprocess-exited-with-error
× python setup.py egg_info did not run successfully.
│ exit code: 1
╰─> [7 lines of output]
Traceback (most recent call last):
File "<string>", line 2, in <module>
File "<pip-setuptools-caller>", line 34, in <module>
File "/tmp/pip-install-1sy9gva9/pypiwin32_8571824ef2674f0a8bcf87a3647a5381/setup.py", line 121
print "Building pywin32", pywin32_version
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
SyntaxError: Missing parentheses in call to 'print'. Did you mean print(...)?
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: metadata-generation-failed
× Encountered error while generating package metadata.
╰─> See above for output.
note: This is an issue with the package mentioned above, not pip.
hint: See above for details.
Where is the problem and how can it be solved?
A:
It looks like the error is coming from the package itself, specifically with the syntax used in the code. The error message indicates that there is a missing parenthesis in a print statement, and suggests that you should use print() instead of just print.
To solve the issue, you could try installing an older version of the package that doesn't have this syntax error. You can do this by using the following command:
$ pip install pypiwin32==<older version number>
For example:
$ pip install pypiwin32==219
If this doesn't work, you could try downloading the package manually from PyPI (https://pypi.org/project/pypiwin32/) and installing it manually with the following command:
$ python setup.py install
Alternatively, you could try using a different package that provides the same functionality, such as pywin32 (https://pypi.org/project/pywin32/). You can install this package using the following command:
$ pip install pywin32
Once you have installed the package, you should be able to import win32api from PyInstaller.compat without any issues.
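As a quick sanity check after installing, you can verify that the module is importable from the same Python that PyInstaller uses (a hedged sketch; the reported path will differ on msys2):
$ python -c "import win32api; print(win32api.__file__)"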
| Cannot import name 'win32api' from 'PyInstaller.compat' | I am trying to run pyinstaller in msys2 in Windows7. However, I am getting following error:
ImportError: cannot import name 'win32api' from 'PyInstaller.compat' (/usr/lib/python3.10/site-packages/PyInstaller/compat.py)
I checked on the internet and found a possible solution: pip install pypiwin32. However, it is giving the following error:
$ pip install pypiwin32
Collecting pypiwin32
Using cached pypiwin32-223-py3-none-any.whl (1.7 kB)
Using cached pypiwin32-219.zip (4.8 MB)
Preparing metadata (setup.py) ... error
error: subprocess-exited-with-error
× python setup.py egg_info did not run successfully.
│ exit code: 1
╰─> [7 lines of output]
Traceback (most recent call last):
File "<string>", line 2, in <module>
File "<pip-setuptools-caller>", line 34, in <module>
File "/tmp/pip-install-1sy9gva9/pypiwin32_8571824ef2674f0a8bcf87a3647a5381/setup.py", line 121
print "Building pywin32", pywin32_version
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
SyntaxError: Missing parentheses in call to 'print'. Did you mean print(...)?
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: metadata-generation-failed
× Encountered error while generating package metadata.
╰─> See above for output.
note: This is an issue with the package mentioned above, not pip.
hint: See above for details.
Where is the problem and how can it be solved?
| [
"It looks like the error is coming from the package itself, specifically with the syntax used in the code. The error message indicates that there is a missing parenthesis in a print statement, and suggests that you should use print() instead of just print.\nTo solve the issue, you could try installing an older version of the package that doesn't have this syntax error. You can do this by using the following command:\n$ pip install pypiwin32==<older version number>\n\nFor example:\n$ pip install pypiwin32==219\n\nIf this doesn't work, you could try downloading the package manually from PyPI (https://pypi.org/project/pypiwin32/) and installing it manually with the following command:\n$ python setup.py install\n\nAlternatively, you could try using a different package that provides the same functionality, such as pywin32 (https://pypi.org/project/pywin32/). You can install this package using the following command:\n$ pip install pywin32\n\nOnce you have installed the package, you should be able to import win32api from PyInstaller.compat without any issues.\n"
] | [
1
] | [] | [] | [
"pyinstaller",
"python"
] | stackoverflow_0074675800_pyinstaller_python.txt |
Q:
React Native: How to detect phone with dynamic island?
Is it possible to target phones (iPhone 14 Pro and iPhone 14 Pro Max) with dynamic islands with React Native?
A:
The npm package react-native-dynamic-island might be what you are looking for. It's also available on GitHub so you can take a look at the implementation.
A:
Just to complement the other answer, it is also possible to use react-native-device-info
const iPhonesWithDynamicIsland = ['iPhone15,2', 'iPhone15,3']; // iPhone 14 Pro, iPhone 14 Pro Max
const isIphoneWithDynamicIsland = iPhonesWithDynamicIsland.includes(DeviceInfo.getDeviceId());
console.log(isIphoneWithDynamicIsland);
or even simpler:
DeviceInfo.hasDynamicIsland()
| React Native: How to detect phone with dynamic island? | Is it possible to target phones (iPhone 14 Pro and iPhone 14 Pro Max) with dynamic islands with React Native?
| [
"The npm package react-native-dynamic-island might be what you are looking for. It's also available on GitHub so you can take a look at the implementation.\n",
"Just to complement the other answer it is also possible to use react-native-device-info\n const iPhonesWithDynamicIsland = ['iPhone15,2', 'iPhone15,3']; // iPhone 14 Pro, iPhone 14 Pro Max\n const isIphoneWithDynamicIsland = iPhonesWithDynamicIsland.includes(DeviceInfo.getDeviceId());\n console.log(isIphoneWithDynamicIsland);\n\nor even simpler:\nDeviceInfo.hasDynamicIsland()\n\n"
] | [
1,
0
] | [] | [] | [
"dynamic_island",
"react_native"
] | stackoverflow_0074675768_dynamic_island_react_native.txt |
Q:
I hit a roadblock on coding for my Dad's work
Here is my code, I am genuinely lost in trying to create multiple fill arcs:
public static final double PATTERN_LEFT = 50.0; // Left side of the pattern
public static final double PATTERN_TOP = 50.0; // Top of the pattern
public static final double PATTERN_SIZE = 300.0; // The size of the pattern on the window
/** YOUR DOCUMENTATION COMMENT */
public void drawStar(){
UI.clearGraphics();
UI.setColor(Color.black);
UI.drawOval(PATTERN_LEFT, PATTERN_TOP, PATTERN_SIZE, PATTERN_SIZE);
double num = UI.askInt("How many rays:");
double startAngle = 90;
double arcAngle = -90;
double amount = PATTERN_SIZE/num;
/*# YOUR CODE HERE */
for (int i = 0; i < num; i++){
double x = PATTERN_LEFT + 1 * amount;
UI.fillArc((PATTERN_LEFT+PATTERN_SIZE/2.0),PATTERN_TOP,(PATTERN_SIZE/num/2.0),PATTERN_SIZE,startAngle,arcAngle);
}
}
My outcome needs to look like this.
So far I am very, very far off.
I couldn't do much sorry
| I hit a roadblock on coding for my Dad's work | Here is my code, I am genuinely lost in trying to create multiple fill arcs:
public static final double PATTERN_LEFT = 50.0; // Left side of the pattern
public static final double PATTERN_TOP = 50.0; // Top of the pattern
public static final double PATTERN_SIZE = 300.0; // The size of the pattern on the window
/** YOUR DOCUMENTATION COMMENT */
public void drawStar(){
UI.clearGraphics();
UI.setColor(Color.black);
UI.drawOval(PATTERN_LEFT, PATTERN_TOP, PATTERN_SIZE, PATTERN_SIZE);
double num = UI.askInt("How many rays:");
double startAngle = 90;
double arcAngle = -90;
double amount = PATTERN_SIZE/num;
/*# YOUR CODE HERE */
for (int i = 0; i < num; i++){
double x = PATTERN_LEFT + 1 * amount;
UI.fillArc((PATTERN_LEFT+PATTERN_SIZE/2.0),PATTERN_TOP,(PATTERN_SIZE/num/2.0),PATTERN_SIZE,startAngle,arcAngle);
}
}
My outcome needs to look like this.
So far I am very, very far off.
I couldn't do much sorry
| [] | [] | [
"Here is a potential solution to your problem:\npublic static final double PATTERN_LEFT = 50.0; // Left side of the pattern\npublic static final double PATTERN_TOP = 50.0; // Top of the pattern\npublic static final double PATTERN_SIZE = 300.0; // The size of the pattern on the window\n\n/** YOUR DOCUMENTATION COMMENT /\npublic void drawStar(){\nUI.clearGraphics();\nUI.setColor(Color.black);\nUI.drawOval(PATTERN_LEFT, PATTERN_TOP, PATTERN_SIZE, PATTERN_SIZE);\ndouble num = UI.askInt(\"How many rays:\");\ndouble startAngle = 90;\ndouble arcAngle = -90;\ndouble amount = PATTERN_SIZE/num;\n/# YOUR CODE HERE */\nfor (int i = 0; i < num; i++){\ndouble x = PATTERN_LEFT + 1 * amount;\nUI.fillArc((PATTERN_LEFT+PATTERN_SIZE/2.0),PATTERN_TOP,(PATTERN_SIZE/num/2.0),PATTERN_SIZE,startAngle,arcAngle);\n// Add this line to shift the start angle for the next iteration\nstartAngle -= (360 / num);\n}\n}\n\nThis solution makes the following changes:\nIn the for loop, the start angle of the fillArc method is shifted by the number of degrees for each ray in each iteration. This ensures that the rays are evenly spaced around the circle.\nThe radius of the fillArc method is also adjusted so that each ray is of equal width.\n"
] | [
-1
] | [
"computer_science",
"java",
"loops",
"nested_loops"
] | stackoverflow_0074675904_computer_science_java_loops_nested_loops.txt |
Q:
update-database Format of the initialization string does not conform to specification starting at index 0
I am currently learning ASP.Net Core Web App (MVC), and facing an unexpected error when running update-database command in Package Manager Console after add-migration.
steps followed in PackageManager Console:
add-migration xyz
update-database
Error:
Format of the initialization string does not conform to specification starting at index 0.
I have installed all related Entity Framework dependencies (version 6.0.0)
This is the connection string I have made in appsettings.json:
"ConnectionStrings": {
"DefaultConnection": "Server=server_name;Database=db_name;Trusted_Connection=True;"
}
and this is what I have updated in program.cs :
var provider = builder.Services.BuildServiceProvider();
var configuration = provider.GetRequiredService<IConfiguration>();
// Add services to the container.
var connectionString = configuration.GetConnectionString("DefaultConnection");
builder.Services.AddControllersWithViews();
builder.Services.AddDbContext<db_name>(options =>
options.UseSqlServer("connectionString"));
to create DB context I have written this in the respective class:
public class DBxyz : DbContext
{
public DBxyz(DbContextOptions<DBxyz> options) : base(options)
{
}
public DbSet<table_1> t1 { get; set; }
public DbSet<table_2> t2 { get; set; }
public DbSet<table_3> t3 { get; set; }
public DbSet<table_4> t4 { get; set; }
}
What seems to be the problem here?
A:
Sometimes it helps to add "Encrypt=False" to the connection string.
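It may also be worth double-checking how the connection string is passed. In the posted program.cs the literal string "connectionString" is handed to UseSqlServer instead of the variable, and passing text that is not a valid connection string produces exactly the "Format of the initialization string does not conform to specification starting at index 0" error. A minimal sketch, assuming that quoted literal was unintended:
var connectionString = builder.Configuration.GetConnectionString("DefaultConnection");
builder.Services.AddDbContext<DBxyz>(options =>
    options.UseSqlServer(connectionString)); // pass the variable, not the literal "connectionString"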
| update-database Format of the initialization string does not conform to specification starting at index 0 | I am currently learning ASP.Net Core Web App (MVC), and facing an unexpected error when running update-database command in Package Manager Console after add-migration.
steps followed in PackageManager Console:
add-migration xyz
update-database
Error:
Format of the initialization string does not conform to specification starting at index 0.
I have installed all related Entity Framework dependencies (version 6.0.0)
This is the connection string I have made in appsettings.json:
"ConnectionStrings": {
"DefaultConnection": "Server=server_name;Database=db_name;Trusted_Connection=True;"
}
and this is what I have updated in program.cs :
var provider = builder.Services.BuildServiceProvider();
var configuration = provider.GetRequiredService<IConfiguration>();
// Add services to the container.
var connectionString = configuration.GetConnectionString("DefaultConnection");
builder.Services.AddControllersWithViews();
builder.Services.AddDbContext<db_name>(options =>
options.UseSqlServer("connectionString"));
to create DB context I have written this in the respective class:
public class DBxyz : DbContext
{
public DBxyz(DbContextOptions<DBxyz> options) : base(options)
{
}
public DbSet<table_1> t1 { get; set; }
public DbSet<table_2> t2 { get; set; }
public DbSet<table_3> t3 { get; set; }
public DbSet<table_4> t4 { get; set; }
}
What seems to be the problem here?
| [
"Sometimes it helps to add in the connection string \"Encrypted=false\"\n"
] | [
0
] | [] | [] | [
"asp.net_mvc",
"c#",
"entity_framework_core"
] | stackoverflow_0074673734_asp.net_mvc_c#_entity_framework_core.txt |
Q:
mark struct as unmanaged in C# - Unity ECS Baker
I'm dealing with the new ECS package (com.unity.entities) and have the following code in my MonoBehaviour:
public class LevelBaker : Baker<LevelMono>
{
public override void Bake(LevelMono authoring)
{
AddComponent(new LevelProperties
{
SpawnDimensions = authoring.SpawnDimensions,
NeutralSpawnCount = authoring.NeutralSpawnCount,
NeutralActorPrefab = GetEntity(authoring.NeutralActorPrefab)
});
AddComponent(new LevelRandom
{
Value = Random.CreateFromIndex(authoring.RandomSeed)
});
}
}
Code runs ok, but Rider highlights the AddComponent method with
The type 'ComponentsAndTags.LevelProperties' must be valid unmanaged
type (simple numeric, 'bool', 'char', 'void', enumeration type or
non-generic struct type with all fields of unmanaged types at any
level of nesting) in order to use it as a type argument for 'T'
parameter
error, since the method is declared like this:
public void AddComponent<T>(in T component) where T : unmanaged, IComponentData
LevelProperties and LevelRandom are simple structs, containing only unmanaged types, but Rider doesn't seem to know it. Here's the code of LevelProperties:
public struct LevelProperties : IComponentData
{
public float2 SpawnDimensions;
public int NeutralSpawnCount;
public Entity NeutralActorPrefab;
}
How can I "mark" the LevelProperties struct as unmanaged so Rider would stop highliting it as an error?
I'm using newest, current version of Rider and Unity 2022.2.0b16. Code compiles and runs, only Rider shows error.
A:
Structs are unmanaged when their fields and properties are also unmanaged. The use of Entity as a type is probably the reason it is considered managed.
See https://learn.microsoft.com/en-us/dotnet/csharp/language-reference/builtin-types/unmanaged-types
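If you want the compiler itself (rather than Rider's analyzer) to confirm whether a struct satisfies the unmanaged constraint, a tiny helper can be used; this is only a sketch and not part of the Entities API:
public static class UnmanagedCheck
{
    // This call compiles only when T is a valid unmanaged type.
    public static void Require<T>() where T : unmanaged { }
}
// e.g. UnmanagedCheck.Require<LevelProperties>(); // no compile error means the struct is unmanaged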
| mark struct as unmanaged in C# - Unity ECS Baker | I'm dealing with the new ECS package (com.unity.entities) and have the following code in my MonoBehaviour:
public class LevelBaker : Baker<LevelMono>
{
public override void Bake(LevelMono authoring)
{
AddComponent(new LevelProperties
{
SpawnDimensions = authoring.SpawnDimensions,
NeutralSpawnCount = authoring.NeutralSpawnCount,
NeutralActorPrefab = GetEntity(authoring.NeutralActorPrefab)
});
AddComponent(new LevelRandom
{
Value = Random.CreateFromIndex(authoring.RandomSeed)
});
}
}
Code runs ok, but Rider highlights the AddComponent method with
The type 'ComponentsAndTags.LevelProperties' must be valid unmanaged
type (simple numeric, 'bool', 'char', 'void', enumeration type or
non-generic struct type with all fields of unmanaged types at any
level of nesting) in order to use it as a type argument for 'T'
parameter
error, since the method is declared like this:
public void AddComponent<T>(in T component) where T : unmanaged, IComponentData
LevelProperties and LevelRandom are simple structs, containing only unmanaged types, but Rider doesn't seem to know it. Here's the code of LevelProperties:
public struct LevelProperties : IComponentData
{
public float2 SpawnDimensions;
public int NeutralSpawnCount;
public Entity NeutralActorPrefab;
}
How can I "mark" the LevelProperties struct as unmanaged so Rider would stop highliting it as an error?
I'm using newest, current version of Rider and Unity 2022.2.0b16. Code compiles and runs, only Rider shows error.
| [
"Structs are unmanaged when their fields and properties are also unmanaged. The use of Entity as a type is probably the reason it is considered managed.\nSee https://learn.microsoft.com/en-us/dotnet/csharp/language-reference/builtin-types/unmanaged-types\n"
] | [
0
] | [] | [] | [
"c#",
"rider",
"struct",
"unity3d",
"unmanaged"
] | stackoverflow_0074667028_c#_rider_struct_unity3d_unmanaged.txt |
Q:
How to pass dropdown value in a form coming from a separate component
I have a form with two fields. One is a textfield, and I can get its data without any problem. The second field is a dropdown. This dropdown is a separate component within the form.
How can I pass the selected dropdown value with my form?
The setup is like this:
Form:
import { useState } from 'react';
import { SensorTypeDropdown } from '../add/SensorTypeDropdown'
const AddSensor = () => {
const [imei, setImei] = useState('');
const handleSubmit = (event: any) => {
alert('Sensor with IMEI: ' + imei + ' created.');
event.preventDefault(); //prevents page from refreshing
setImei('')//clears form input data
}
return (
<div className="container mx-auto">
<form className="bg-white shadow-md rounded px-8 pt-6 pb-8 mb-4" onSubmit={handleSubmit}>
<div className="mb-4">
<label className="block text-sm font-bold mb-2" htmlFor="imei">
IMEI
</label>
<input className="border rounded w-full py-2 px-3 focus:shadow-outline focus:outline-sky-700" value={imei} onChange={event => setImei(event.target.value)} id="sensorName" />
</div>
<div className="mb-6">
<label className="block text-sm font-bold mb-2" htmlFor="sensorType">
Sensor type
</label>
<SensorTypeDropdown/>
</div>
<div className="flex items-center justify-between">
<input type="submit" className="cursor-pointer bg-sky-700 hover:bg-sky-800 text-white font-bold py-2 px-4 rounded focus:shadow-outline" value="Create sensor" />
</div>
</form>
</div>
)
}
export default AddSensor;
And my separate dropdown component:
import { SharedAmbientSurrounding } from "@libs/data";
import { useState } from "react";
import { getEnumKeys } from "../../helpers/getEnumKeys";
export const SensorTypeDropdown = () => {
const [currentType, setCurrentType] = useState<SharedAmbientSurrounding>(SharedAmbientSurrounding.TEMPERATURE);
const [selectedType, setSelectedType] = useState('')
return (
<select
value={currentType}
onChange={(e) => {
setCurrentType(SharedAmbientSurrounding[e.target.value as keyof typeof SharedAmbientSurrounding]);
}}
>
{getEnumKeys(SharedAmbientSurrounding).map((key, index) => (
<option key={index} value={SharedAmbientSurrounding[key]}>
{key}
</option>
))}
</select>
);
}
Any help is appreciated. Thanks!
A:
You should manage the dropdown state in the parent component AddSensor
AddSensor
const AddSensor = () => {
const [imei, setImei] = useState('');
const [type, setType] = useState(SharedAmbientSurrounding.TEMPERATURE);
const handleSubmit = (event) => {
alert('Sensor with IMEI: ' + imei + ' and type: ' + type + ' created.');
event.preventDefault(); //prevents page from refreshing
//clears form input data
setImei('');
setType(SharedAmbientSurrounding.TEMPERATURE);
}
....
<div className="mb-6">
<label className="block text-sm font-bold mb-2" htmlFor="sensorType">
Sensor type
</label>
<SensorTypeDropdown value={type} onChange={setType} />
</div>
...
}
SensorTypeDropdown
export const SensorTypeDropdown = ({value, onChange}) => {
return (
<select
value={value}
onChange={e => onChange(e.target.value)}
>
{getEnumKeys(SharedAmbientSurrounding).map((key, index) => (
<option key={key} value={SharedAmbientSurrounding[key]}>
{key}
</option>
))}
</select>
);
}
| How to pass dropdown value in a form coming from a separate component | I have a form with two fields. One is a textfield, and I can get its data without any problem. The second field is a dropdown. This dropdown is a separate component within the form.
How can I pass the selected dropdown value with my form?
The setup is like this:
Form:
import { useState } from 'react';
import { SensorTypeDropdown } from '../add/SensorTypeDropdown'
const AddSensor = () => {
const [imei, setImei] = useState('');
const handleSubmit = (event: any) => {
alert('Sensor with IMEI: ' + imei + ' created.');
event.preventDefault(); //prevents page from refreshing
setImei('')//clears form input data
}
return (
<div className="container mx-auto">
<form className="bg-white shadow-md rounded px-8 pt-6 pb-8 mb-4" onSubmit={handleSubmit}>
<div className="mb-4">
<label className="block text-sm font-bold mb-2" htmlFor="imei">
IMEI
</label>
<input className="border rounded w-full py-2 px-3 focus:shadow-outline focus:outline-sky-700" value={imei} onChange={event => setImei(event.target.value)} id="sensorName" />
</div>
<div className="mb-6">
<label className="block text-sm font-bold mb-2" htmlFor="sensorType">
Sensor type
</label>
<SensorTypeDropdown/>
</div>
<div className="flex items-center justify-between">
<input type="submit" className="cursor-pointer bg-sky-700 hover:bg-sky-800 text-white font-bold py-2 px-4 rounded focus:shadow-outline" value="Create sensor" />
</div>
</form>
</div>
)
}
export default AddSensor;
And my separate dropdown component:
import { SharedAmbientSurrounding } from "@libs/data";
import { useState } from "react";
import { getEnumKeys } from "../../helpers/getEnumKeys";
export const SensorTypeDropdown = () => {
const [currentType, setCurrentType] = useState<SharedAmbientSurrounding>(SharedAmbientSurrounding.TEMPERATURE);
const [selectedType, setSelectedType] = useState('')
return (
<select
value={currentType}
onChange={(e) => {
setCurrentType(SharedAmbientSurrounding[e.target.value as keyof typeof SharedAmbientSurrounding]);
}}
>
{getEnumKeys(SharedAmbientSurrounding).map((key, index) => (
<option key={index} value={SharedAmbientSurrounding[key]}>
{key}
</option>
))}
</select>
);
}
Any help is appreciated. Thanks!
| [
"You should manage the dropdown state in the parent component AddSensor\nAddSendor\nconst AddSensor = () => {\n const [imei, setImei] = useState('');\n const [type, setType] = useState(SharedAmbientSurrounding.TEMPERATURE);\n\n const handleSubmit = (event) => {\n alert('Sensor with IMEI: ' + imei + ' and type: ' + type + ' created.');\n event.preventDefault(); //prevents page from refreshing\n //clears form input data\n setImei('');\n setType(SharedAmbientSurrounding.TEMPERATURE);\n }\n\n ....\n\n <div className=\"mb-6\">\n <label className=\"block text-sm font-bold mb-2\" htmlFor=\"sensorType\">\n Sensor type\n </label>\n <SensorTypeDropdown value={type} onChange={setType} />\n </div>\n ...\n}\n\nSensorTypeDropdown\nexport const SensorTypeDropdown = ({value, onChange}) => {\n return (\n <select \n value={value} \n onChange={e => onChange(e.target.value)}\n >\n {getEnumKeys(SharedAmbientSurrounding).map((key, index) => (\n <option key={key} value={SharedAmbientSurrounding[key]}>\n {key}\n </option>\n ))}\n </select>\n );\n}\n\n"
] | [
0
] | [] | [] | [
"react_typescript",
"reactjs",
"typescript"
] | stackoverflow_0074675841_react_typescript_reactjs_typescript.txt |
Q:
How to solve TextView being cropped in small devices because of layout margin?
I have a ConstraintLayout with some elements, including an AppCompatTextView. I am using app:autoSizeTextType="uniform" on the AppCompatTextView to resize the text according to the screen size (to make this work, the AppCompatTextView width and height are 0dp, i.e. match constraints). To make the text look a normal size on 5" and 6" devices, I added a layout_marginTop and a layout_marginBottom so that the AppCompatTextView's bounds, and therefore the auto-sized text, become smaller. The problem is that on small devices like 3.7" or 4" the AppCompatTextView gets cropped: the layout_margin is so big for that screen that the text no longer fits.
<androidx.appcompat.widget.AppCompatTextView
android:id="@+id/tev"
android:layout_width="0dp"
android:layout_height="0dp"
android:gravity="start"
android:text="@string/strng"
app:autoSizeTextType="uniform"
app:layout_constraintBottom_toBottomOf="@id/dr2"
app:layout_constraintEnd_toStartOf="@id/dlo"
app:layout_constraintStart_toEndOf="@id/miu"
app:layout_constraintTop_toTopOf="@id/cra"
android:layout_marginTop="12dp"
android:layout_marginBottom="12dp"
/>
See how the AppCompatTextView crops (The text is "Example TextView"):
I have seen that a possible solution could be to set app:autoSizeMinTextSize to a small value, but I think a better solution might be to set a dynamic layout margin that adapts to the screen size. Could that be a good approach?
A:
It's hard to know what would work for you since we only see a portion of your code/window, but I think this could work
Option A: specify the min, max and granularity of the autoSize, so you can control how the text changes
android:autoSizeMinTextSize="12sp"
android:autoSizeMaxTextSize="100sp"
android:autoSizeStepGranularity="2sp"
Option B: create a set of sizes in resources and use android:autoSizePresetSizes="@array/autosize_text_sizes"
In res/values/arrays.xml create
<resources>
<array name="autosize_text_sizes">
<item>10sp</item>
<item>12sp</item>
<item>20sp</item>
<item>40sp</item>
<item>100sp</item>
</array>
</resources>
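Note that because the question uses AppCompatTextView with the app: namespace for autoSizeTextType, the matching app:-prefixed attributes can be used as well (they are the ones honoured on pre-API-26 devices); a sketch applied to the original view, with the same example values:
<androidx.appcompat.widget.AppCompatTextView
    ...
    app:autoSizeTextType="uniform"
    app:autoSizeMinTextSize="12sp"
    app:autoSizeMaxTextSize="100sp"
    app:autoSizeStepGranularity="2sp" />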
| How to solve TextView being cropped in small devices because of layout margin? | I have a ConstraintLayout with some elements including an AppCompatTextView. Also, I am using app:autoSizeTextType="uniform" in the AppCompatTextView to resize it according to the screen size (so, to make this work the AppCompatTextView width and height is 0dp (match_parent)) . The problem is that to make the text look a normal size in a 5" and 6" devices, I have added a layout_marginTop and a layout_marginBottom to make the layout of the AppCompatTextView smaller so that the AppCompatTextView size is resizing to a smaller size as its layout is smaller. The problem is that in small devices like 3.7" or 4" the AppCompatTextView gets cropped because of its layout because of the layout_margin is so big for that screen density that the text doesn't fit.
<androidx.appcompat.widget.AppCompatTextView
android:id="@+id/tev"
android:layout_width="0dp"
android:layout_height="0dp"
android:gravity="start"
android:text="@string/strng"
app:autoSizeTextType="uniform"
app:layout_constraintBottom_toBottomOf="@id/dr2"
app:layout_constraintEnd_toStartOf="@id/dlo"
app:layout_constraintStart_toEndOf="@id/miu"
app:layout_constraintTop_toTopOf="@id/cra"
android:layout_marginTop="12dp"
android:layout_marginBottom="12dp"
/>
See how the AppCompatTextView crops (The text is "Example TextView"):
I have seen that a possible solution could be to set app:autoSizeMinTextSize to a small dp, but I have thought that it could be a better solution to set a dynamic layout margin according to the screen size so that the layout margin adapts to the screen size, could that be a good solution?
| [
"Its hard to know what would work for you since we only see a portion of your code/window, but I think this could work\nOption A: specify the min, max and granularity of the autoSize, so you can control how the text changes\nandroid:autoSizeMinTextSize=\"12sp\"\nandroid:autoSizeMaxTextSize=\"100sp\"\nandroid:autoSizeStepGranularity=\"2sp\"\n\nOption B: create a set of sizes in resources and use android:autoSizePresetSizes=\"@array/autosize_text_sizes\"\nIn res/values/arrays.xml create\n<resources>\n <array name=\"autosize_text_sizes\">\n <item>10sp</item>\n <item>12sp</item>\n <item>20sp</item>\n <item>40sp</item>\n <item>100sp</item>\n </array>\n</resources>\n\n"
] | [
0
] | [] | [] | [
"android",
"android_layout"
] | stackoverflow_0074675738_android_android_layout.txt |
Q:
ViewPager2 with horizontal scrollView inside
I implemented the new ViewPager for my project.
The viewPager2 contains a list of fragment
private class ViewPagerAdapter extends FragmentStateAdapter {
private ArrayList<Integer> classifiedIds;
ViewPagerAdapter(@NonNull Fragment fragment, final ArrayList<Integer> classifiedIds) {
super(fragment);
this.classifiedIds = classifiedIds;
}
@NonNull
@Override
public Fragment createFragment(int position) {
return DetailsFragment.newInstance(classifiedIds.get(position));
}
@Override
public int getItemCount() {
return classifiedIds.size();
}
}
Inside the fragment I got an horizontal recyclerView
LinearLayoutManager layoutManager = new LinearLayoutManager(getContext(), LinearLayoutManager.HORIZONTAL, false);
recyclerViewPicture.setLayoutManager(layoutManager);
The issue is that when I try to scroll the RecyclerView, the ViewPager takes the touch and swipes to the next fragment.
When I was using the old ViewPager I didn't have this issue
A:
I met the same problem: using AndroidX, a ViewPager2 (with horizontal orientation) having a RecyclerView (with horizontal orientation) inside one of its pages.
The working solution I found is from Google issueTracker. Here is my Java translation of the Kotlin class:
import android.content.Context;
import android.util.AttributeSet;
import android.view.MotionEvent;
import android.view.View;
import android.view.ViewConfiguration;
import android.widget.FrameLayout;
import androidx.annotation.NonNull;
import androidx.annotation.Nullable;
import androidx.viewpager2.widget.ViewPager2;
// from https://issuetracker.google.com/issues/123006042#comment21
/**
* Layout to wrap a scrollable component inside a ViewPager2. Provided as a solution to the problem
* where pages of ViewPager2 have nested scrollable elements that scroll in the same direction as
* ViewPager2. The scrollable element needs to be the immediate and only child of this host layout.
*
* This solution has limitations when using multiple levels of nested scrollable elements
* (e.g. a horizontal RecyclerView in a vertical RecyclerView in a horizontal ViewPager2).
*/
public class NestedScrollableHost extends FrameLayout {
private int touchSlop = 0;
private float initialX = 0.0f;
private float initialY = 0.0f;
private ViewPager2 parentViewPager() {
View v = (View)this.getParent();
while( v != null && !(v instanceof ViewPager2) )
v = (View)v.getParent();
return (ViewPager2)v;
}
private View child() { return (this.getChildCount() > 0 ? this.getChildAt(0) : null); }
private void init() {
this.touchSlop = ViewConfiguration.get(this.getContext()).getScaledTouchSlop();
}
public NestedScrollableHost(@NonNull Context context) {
super(context);
this.init();
}
public NestedScrollableHost(@NonNull Context context, @Nullable AttributeSet attrs) {
super(context, attrs);
this.init();
}
public NestedScrollableHost(@NonNull Context context, @Nullable AttributeSet attrs, int defStyleAttr) {
super(context, attrs, defStyleAttr);
this.init();
}
public NestedScrollableHost(@NonNull Context context, @Nullable AttributeSet attrs, int defStyleAttr, int defStyleRes) {
super(context, attrs, defStyleAttr, defStyleRes);
this.init();
}
private boolean canChildScroll(int orientation, Float delta) {
int direction = (int)(Math.signum(-delta));
View child = this.child();
if( child == null )
return false;
if( orientation == 0 )
return child.canScrollHorizontally(direction);
if( orientation == 1 )
return child.canScrollVertically(direction);
return false;
}
@Override
public boolean onInterceptTouchEvent(MotionEvent ev) {
this.handleInterceptTouchEvent(ev);
return super.onInterceptTouchEvent(ev);
}
private void handleInterceptTouchEvent(MotionEvent ev) {
ViewPager2 vp = this.parentViewPager();
if( vp == null )
return;
int orientation = vp.getOrientation();
// Early return if child can't scroll in same direction as parent
if( !this.canChildScroll(orientation, -1.0f) && !this.canChildScroll(orientation, 1.0f) )
return;
if( ev.getAction() == MotionEvent.ACTION_DOWN ) {
this.initialX = ev.getX();
this.initialY = ev.getY();
this.getParent().requestDisallowInterceptTouchEvent(true);
}
else if( ev.getAction() == MotionEvent.ACTION_MOVE ) {
float dx = ev.getX() - this.initialX;
float dy = ev.getY() - this.initialY;
boolean isVpHorizontal = (orientation == ViewPager2.ORIENTATION_HORIZONTAL);
// assuming ViewPager2 touch-slop is 2x touch-slop of child
float scaleDx = Math.abs(dx) * (isVpHorizontal ? 0.5f : 1.0f);
float scaleDy = Math.abs(dy) * (isVpHorizontal ? 1.0f : 0.5f);
if( scaleDx > this.touchSlop || scaleDy > this.touchSlop ) {
if( isVpHorizontal == (scaleDy > scaleDx) ) {
// Gesture is perpendicular, allow all parents to intercept
this.getParent().requestDisallowInterceptTouchEvent(false);
}
else {
// Gesture is parallel, query child if movement in that direction is possible
if( this.canChildScroll(orientation, (isVpHorizontal ? dx : dy)) ) {
this.getParent().requestDisallowInterceptTouchEvent(true);
}
else {
// Child cannot scroll, allow all parents to intercept
this.getParent().requestDisallowInterceptTouchEvent(false);
}
}
}
}
}
}
Then, just embed your nested RecyclerView inside a NestedScrollableHost container:
<mywishlist.sdk.Base.NestedScrollableHost
android:layout_width="match_parent"
android:layout_height="match_parent">
<androidx.recyclerview.widget.RecyclerView
android:id="@+id/photos"
android:layout_width="match_parent"
android:layout_height="match_parent"
android:background="@color/photolist_collection_background"
android:orientation="horizontal">
</androidx.recyclerview.widget.RecyclerView>
</mywishlist.sdk.Base.NestedScrollableHost>
It solved my scrolling conflict between the nested RecyclerView and its hosting ViewPager2.
A:
I found a solution: it's a known bug, as you can see here https://issuetracker.google.com/issues/123006042; maybe they will fix it in a future update.
Thanks to TakeInfos and the example project inside the link.
recyclerViewPicture.addOnItemTouchListener(new RecyclerView.OnItemTouchListener() {
int lastX = 0;
@Override
public boolean onInterceptTouchEvent(@NonNull RecyclerView rv, @NonNull MotionEvent e) {
switch (e.getAction()) {
case MotionEvent.ACTION_DOWN:
lastX = (int) e.getX();
break;
case MotionEvent.ACTION_MOVE:
boolean isScrollingRight = e.getX() < lastX;
if ((isScrollingRight && ((LinearLayoutManager) recyclerViewPicture.getLayoutManager()).findLastCompletelyVisibleItemPosition() == recyclerViewPicture.getAdapter().getItemCount() - 1) ||
(!isScrollingRight && ((LinearLayoutManager) recyclerViewPicture.getLayoutManager()).findFirstCompletelyVisibleItemPosition() == 0)) {
viewPager.setUserInputEnabled(true);
} else {
viewPager.setUserInputEnabled(false);
}
break;
case MotionEvent.ACTION_UP:
lastX = 0;
viewPager.setUserInputEnabled(true);
break;
}
return false;
}
@Override
public void onTouchEvent(@NonNull RecyclerView rv, @NonNull MotionEvent e) {
}
@Override
public void onRequestDisallowInterceptTouchEvent(boolean disallowIntercept) {
}
});
I'm checking whether the user scrolls to the right or to the left. If the user reaches the end or the start of the RecyclerView, I enable or disable swiping on the ViewPager.
A:
In my opinion, this solution (stolen from Daniel Knauf post) is much simpler than creating a wrapper but still not official:
recyclerViewPicture.addOnItemTouchListener(
object : RecyclerView.OnItemTouchListener {
private var startX = 0f
override fun onInterceptTouchEvent(
recyclerView: RecyclerView,
event: MotionEvent
): Boolean =
when (event.action) {
MotionEvent.ACTION_DOWN -> startX = event.x
MotionEvent.ACTION_MOVE -> {
val isScrollingRight = event.x < startX
val scrollItemsToRight = isScrollingRight && recyclerView.canScrollRight
val scrollItemsToLeft = !isScrollingRight && recyclerView.canScrollLeft
val disallowIntercept = scrollItemsToRight || scrollItemsToLeft
recyclerView.parent.requestDisallowInterceptTouchEvent(disallowIntercept)
}
MotionEvent.ACTION_UP -> startX = 0f
else -> Unit
}.let { false }
override fun onTouchEvent(rv: RecyclerView, e: MotionEvent) = Unit
override fun onRequestDisallowInterceptTouchEvent(disallowIntercept: Boolean) = Unit
}
)
val RecyclerView.canScrollRight: Boolean
get() = canScrollHorizontally(SCROLL_DIRECTION_RIGHT)
val RecyclerView.canScrollLeft: Boolean
get() = canScrollHorizontally(SCROLL_DIRECTION_LEFT)
private const val SCROLL_DIRECTION_RIGHT = 1
private const val SCROLL_DIRECTION_LEFT = -1
| ViewPager2 with horizontal scrollView inside | I implemented the new ViewPager for my project.
The viewPager2 contains a list of fragment
private class ViewPagerAdapter extends FragmentStateAdapter {
private ArrayList<Integer> classifiedIds;
ViewPagerAdapter(@NonNull Fragment fragment, final ArrayList<Integer> classifiedIds) {
super(fragment);
this.classifiedIds = classifiedIds;
}
@NonNull
@Override
public Fragment createFragment(int position) {
return DetailsFragment.newInstance(classifiedIds.get(position));
}
@Override
public int getItemCount() {
return classifiedIds.size();
}
}
Inside the fragment I got an horizontal recyclerView
LinearLayoutManager layoutManager = new LinearLayoutManager(getContext(), LinearLayoutManager.HORIZONTAL, false);
recyclerViewPicture.setLayoutManager(layoutManager);
The issue is that when I try to scroll the RecyclerView, the ViewPager takes the touch and swipes to the next fragment.
When I was using the old ViewPager I didn't have this issue
| [
"I met the same problem: using AndroidX, a ViewPager2 (with horizontal orientation) having a RecyclerView (with horizontal orientation) inside one of its page.\nThe working solution I found is from Google issueTracker. Here is my Java translation of the Kotlin class:\nimport android.content.Context;\nimport android.util.AttributeSet;\nimport android.view.MotionEvent;\nimport android.view.View;\nimport android.view.ViewConfiguration;\nimport android.widget.FrameLayout;\n\nimport androidx.annotation.NonNull;\nimport androidx.annotation.Nullable;\nimport androidx.viewpager2.widget.ViewPager2;\n\n// from https://issuetracker.google.com/issues/123006042#comment21\n\n/**\n * Layout to wrap a scrollable component inside a ViewPager2. Provided as a solution to the problem\n * where pages of ViewPager2 have nested scrollable elements that scroll in the same direction as\n * ViewPager2. The scrollable element needs to be the immediate and only child of this host layout.\n *\n * This solution has limitations when using multiple levels of nested scrollable elements\n * (e.g. a horizontal RecyclerView in a vertical RecyclerView in a horizontal ViewPager2).\n */\n\npublic class NestedScrollableHost extends FrameLayout {\n\n private int touchSlop = 0;\n private float initialX = 0.0f;\n private float initialY = 0.0f;\n\n private ViewPager2 parentViewPager() {\n View v = (View)this.getParent();\n while( v != null && !(v instanceof ViewPager2) )\n v = (View)v.getParent();\n return (ViewPager2)v;\n }\n\n private View child() { return (this.getChildCount() > 0 ? this.getChildAt(0) : null); }\n\n private void init() {\n this.touchSlop = ViewConfiguration.get(this.getContext()).getScaledTouchSlop();\n }\n\n public NestedScrollableHost(@NonNull Context context) {\n super(context);\n this.init();\n }\n\n public NestedScrollableHost(@NonNull Context context, @Nullable AttributeSet attrs) {\n super(context, attrs);\n this.init();\n }\n\n public NestedScrollableHost(@NonNull Context context, @Nullable AttributeSet attrs, int defStyleAttr) {\n super(context, attrs, defStyleAttr);\n this.init();\n }\n\n public NestedScrollableHost(@NonNull Context context, @Nullable AttributeSet attrs, int defStyleAttr, int defStyleRes) {\n super(context, attrs, defStyleAttr, defStyleRes);\n this.init();\n }\n\n private boolean canChildScroll(int orientation, Float delta) {\n int direction = (int)(Math.signum(-delta));\n View child = this.child();\n\n if( child == null )\n return false;\n\n if( orientation == 0 )\n return child.canScrollHorizontally(direction);\n if( orientation == 1 )\n return child.canScrollVertically(direction);\n\n return false;\n }\n\n @Override\n public boolean onInterceptTouchEvent(MotionEvent ev) {\n this.handleInterceptTouchEvent(ev);\n return super.onInterceptTouchEvent(ev);\n }\n\n private void handleInterceptTouchEvent(MotionEvent ev) {\n ViewPager2 vp = this.parentViewPager();\n if( vp == null )\n return;\n\n int orientation = vp.getOrientation();\n\n // Early return if child can't scroll in same direction as parent\n if( !this.canChildScroll(orientation, -1.0f) && !this.canChildScroll(orientation, 1.0f) )\n return;\n\n if( ev.getAction() == MotionEvent.ACTION_DOWN ) {\n this.initialX = ev.getX();\n this.initialY = ev.getY();\n this.getParent().requestDisallowInterceptTouchEvent(true);\n }\n else if( ev.getAction() == MotionEvent.ACTION_MOVE ) {\n float dx = ev.getX() - this.initialX;\n float dy = ev.getY() - this.initialY;\n boolean isVpHorizontal = (orientation == ViewPager2.ORIENTATION_HORIZONTAL);\n\n 
// assuming ViewPager2 touch-slop is 2x touch-slop of child\n float scaleDx = Math.abs(dx) * (isVpHorizontal ? 0.5f : 1.0f);\n float scaleDy = Math.abs(dy) * (isVpHorizontal ? 1.0f : 0.5f);\n\n if( scaleDx > this.touchSlop || scaleDy > this.touchSlop ) {\n if( isVpHorizontal == (scaleDy > scaleDx) ) {\n // Gesture is perpendicular, allow all parents to intercept\n this.getParent().requestDisallowInterceptTouchEvent(false);\n }\n else {\n // Gesture is parallel, query child if movement in that direction is possible\n if( this.canChildScroll(orientation, (isVpHorizontal ? dx : dy)) ) {\n this.getParent().requestDisallowInterceptTouchEvent(true);\n }\n else {\n // Child cannot scroll, allow all parents to intercept\n this.getParent().requestDisallowInterceptTouchEvent(false);\n }\n }\n }\n }\n }\n}\n\nThen, just embed your nested RecyclerView inside a NestedScrollableHost container:\n<mywishlist.sdk.Base.NestedScrollableHost\n android:layout_width=\"match_parent\"\n android:layout_height=\"match_parent\">\n\n <androidx.recyclerview.widget.RecyclerView\n android:id=\"@+id/photos\"\n android:layout_width=\"match_parent\"\n android:layout_height=\"match_parent\"\n android:background=\"@color/photolist_collection_background\"\n android:orientation=\"horizontal\">\n\n </androidx.recyclerview.widget.RecyclerView>\n\n</mywishlist.sdk.Base.NestedScrollableHost>\n\nIt solved my scrolling conflict between the nested RecyclerView and its hosting ViewPager2.\n",
"I find a solution it's a know bug as you can see here https://issuetracker.google.com/issues/123006042 maybe they would solve it in the next updates\nThanks to TakeInfos and the exemple project inside the link\n recyclerViewPicture.addOnItemTouchListener(new RecyclerView.OnItemTouchListener() {\n int lastX = 0;\n @Override\n public boolean onInterceptTouchEvent(@NonNull RecyclerView rv, @NonNull MotionEvent e) {\n switch (e.getAction()) {\n case MotionEvent.ACTION_DOWN:\n lastX = (int) e.getX();\n break;\n case MotionEvent.ACTION_MOVE:\n boolean isScrollingRight = e.getX() < lastX;\n if ((isScrollingRight && ((LinearLayoutManager) recyclerViewPicture.getLayoutManager()).findLastCompletelyVisibleItemPosition() == recyclerViewPicture.getAdapter().getItemCount() - 1) ||\n (!isScrollingRight && ((LinearLayoutManager) recyclerViewPicture.getLayoutManager()).findFirstCompletelyVisibleItemPosition() == 0)) {\n viewPager.setUserInputEnabled(true);\n } else {\n viewPager.setUserInputEnabled(false);\n }\n break;\n case MotionEvent.ACTION_UP:\n lastX = 0;\n viewPager.setUserInputEnabled(true);\n break;\n }\n return false;\n }\n\n @Override\n public void onTouchEvent(@NonNull RecyclerView rv, @NonNull MotionEvent e) {\n }\n\n @Override\n public void onRequestDisallowInterceptTouchEvent(boolean disallowIntercept) {\n\n }\n });\n\nI'm checking if the user scroll on the right or on the left. If the user reach the end or the start of the recyclerView I'm enable or disable the swipe on the view pager\n",
"In my opinion, this solution (stolen from Daniel Knauf post) is much simpler than creating a wrapper but still not official:\nrecyclerViewPicture.addOnItemTouchListener(\n object : RecyclerView.OnItemTouchListener {\n private var startX = 0f\n\n override fun onInterceptTouchEvent(\n recyclerView: RecyclerView,\n event: MotionEvent\n ): Boolean =\n when (event.action) {\n MotionEvent.ACTION_DOWN -> startX = event.x\n MotionEvent.ACTION_MOVE -> {\n val isScrollingRight = event.x < startX\n val scrollItemsToRight = isScrollingRight && recyclerView.canScrollRight\n val scrollItemsToLeft = !isScrollingRight && recyclerView.canScrollLeft\n val disallowIntercept = scrollItemsToRight || scrollItemsToLeft\n recyclerView.parent.requestDisallowInterceptTouchEvent(disallowIntercept)\n }\n MotionEvent.ACTION_UP -> startX = 0f\n else -> Unit\n }.let { false }\n\n override fun onTouchEvent(rv: RecyclerView, e: MotionEvent) = Unit\n override fun onRequestDisallowInterceptTouchEvent(disallowIntercept: Boolean) = Unit\n }\n)\n\nval RecyclerView.canScrollRight: Boolean\n get() = canScrollHorizontally(SCROLL_DIRECTION_RIGHT)\n\nval RecyclerView.canScrollLeft: Boolean\n get() = canScrollHorizontally(SCROLL_DIRECTION_LEFT)\n\nprivate const val SCROLL_DIRECTION_RIGHT = 1\nprivate const val SCROLL_DIRECTION_LEFT = -1\n\n"
] | [
18,
13,
0
] | [
"Call ViewGroup#onInterceptTouchEvent(MotionEvent). \nSee This Documentation\n"
] | [
-1
] | [
"android",
"android_recyclerview",
"android_touch_event",
"android_viewpager2"
] | stackoverflow_0057587618_android_android_recyclerview_android_touch_event_android_viewpager2.txt |
Q:
Juniper SRX add custom BGP route
I have BGP configured between AzureStack (win2k16) and an SRX210. On the Juniper I see all routes advertised, but the Juniper is only advertising its physical interface networks.
I want the Juniper to also include all static routes that are configured towards the 2k16 machine..
Config NOW (on juniper)
policy-statement send-direct {
term 1 {
from protocol direct;
then accept;
}
group AzureStack {
type internal;
multihop {
ttl 50;
}
export send-direct;
neighbor 172.16.7.14 {
local-address 172.16.7.1;
peer-as 65050;
local-as 65050;
}
}
Received on 2k16
DestinationNetwork NextHop
172.16.4.0/29 172.16.7.1 Juniper
172.16.5.0/24 172.16.7.1 Juniper
172.16.6.0/24 172.16.7.1 Juniper
But my Juniper, for example, has a static route to 172.16.8.0/22 which I want to include in the BGP advertisement.
A:
You would want to modify your policy to advertise static-routes. Maybe create another term in your policy.
set policy-options policy-statement send-direct term2 from protocol static
set policy-options policy-statement send-direct term2 then accept
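After committing, the resulting policy should look roughly like this (a sketch in the same style as the question's config; "term2" is just the term name chosen above):
policy-statement send-direct {
    term 1 {
        from protocol direct;
        then accept;
    }
    term term2 {
        from protocol static;
        then accept;
    }
}
Since the group already has export send-direct applied, no further change should be needed for the static routes to be advertised.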
A:
To include static routes on the Juniper SRX210 in the BGP configuration, you will need to add them as BGP network statements. This can be done by using the following command:
set protocols bgp neighbor <AzureStack-IP-address> network <static-route-network>
You will need to replace <AzureStack-IP-address> with the IP address of the AzureStack machine and <static-route-network> with the network address of the static route.
Once you have added the static routes as BGP network statements, you can verify that they are being advertised by using the "show route protocol bgp" command on the Juniper SRX210. This should display all of the BGP routes, including the static routes that you have added.
| Juniper SRX add custom BGP route | I have BGP configured between AzureStack (win2k16) and SRX210. On the Juniper I see all routes advertised but the Juniper is only advertising its physical interface networks..
I want the Juniper to also include all static routes that are configured towards the 2k16 machine..
Config NOW (on juniper)
policy-statement send-direct {
term 1 {
from protocol direct;
then accept;
}
group AzureStack {
type internal;
multihop {
ttl 50;
}
export send-direct;
neighbor 172.16.7.14 {
local-address 172.16.7.1;
peer-as 65050;
local-as 65050;
}
}
Received on 2k16
DestinationNetwork NextHop
172.16.4.0/29 172.16.7.1 Juniper
172.16.5.0/24 172.16.7.1 Juniper
172.16.6.0/24 172.16.7.1 Juniper
But my Juniper for example has a static route to 172.16.8.0/22 which I want to include in the bgp advertisement..
| [
"You would want to modify your policy to advertise static-routes. Maybe create another term in your policy.\nset policy-options policy-statement send-direct term2 from protocol static \nset policy-options policy-statement send-direct term2 then accept\n\n",
"To include static routes on the Juniper SRX210 in the BGP configuration, you will need to add them as BGP network statements. This can be done by using the following command:\nset protocols bgp neighbor <AzureStack-IP-address> network <static-route-network>\n\nYou will need to replace with the IP address of the AzureStack machine and with the network address of the static route.\nOnce you have added the static routes as BGP network statements, you can verify that they are being advertised by using the \"show route protocol bgp\" command on the Juniper SRX210. This should display all of the BGP routes, including the static routes that you have added.\n"
] | [
0,
0
] | [] | [] | [
"bgp",
"juniper"
] | stackoverflow_0046794093_bgp_juniper.txt |
Q:
Is it possible to use Live-server for PHP with autoreload on save?
I tried to use the Live-Server extension on VS Code for PHP, but it only opened the "root" of the "served" project folder and showed the index.php as a downloadable file link.
Then I read about the Live-Server Web Extension and installed it, but it still did not work.
(Yes, I did enable the web extension inside Live-Server config settings in VS Code).
I've also tried to use the PHP Server extension, which does good job for serving the project, instead of using Apache in XAMPP, but I have not found a way to reload when saving.
Is there even a way to autoreload PHP on PHP Server?
Does the Live-Server Web Extension require something else other than the Live-Server installed in VS Code and enabling the web extension in Live-Server config settings?
I've seen that it works for some people in gifs/videos, but I didn't manage to work it out.
A:
First of all, I want to tell you that Live Server, which is available in the Visual Studio Code marketplace, is the solution to your problem. It works mainly with static webpages like HTML, but it also works with dynamic webpages like PHP, Node.js and ASP.NET in a tricky way. In the following example I will guide you through installing a live server which works with both kinds of webpages (static & dynamic).
Install PHP Server and Live Server from VS Code market place.
Create a PHP file, example index.php and place it in any sub-directory(say, demo) under /var/www/html/, like /var/www/html/demo/
Install the live server extension in the chrome browser and edit like this.
Now click on the "Go Live" button in the VS Code.
Now open the index.php file placed under /var/www/html/demo/ in VS Code, right-click and select "PHP Server: Reload Server", then "PHP Server: Open file in browser".
In the Browser just open the IP address
http://localhost:3000/demo/index.php
you will see that it is working on a live server with a dynamic webpage like PHP. When you edit and save the index.php file in the running VS Code, it will automatically update at that address.
A:
I was having a similar issue and I think I have found a workaround. With php server and live server installed, go to the web extension for live server and check "I don't want a proxy setup". For the actual server address put in your php server address (for me the default was http://localhost:3000/) and for live server address type in http://127.0.0.1:5500 if you kept the live server default address and port. In my settings.json I had "liveServer.settings.useWebExt" set to true, but setting it to false didn't make a difference for me for some reason.
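For reference, the Live Server settings mentioned above live in settings.json; a minimal sketch using the default port (adjust to your own values):
{
  "liveServer.settings.useWebExt": true,
  "liveServer.settings.port": 5500
}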
I couldn't figure out how to get it to open the php server address rather than the live server address when pressing "go live". It still brings up the directory structure and I think the issue lies with live server not being in the working directory of the php server, if that makes sense. However, if you go to the php server address (localhost:3000/) the php pages worked for me and pressing ctrl+s to save updated the page correctly.
I hope this helps!
A:
For having auto reload in PHP files in Visual Studio Code:
Install Live Server extension.
Install PHP Server extension.
Config PHP Server: (PHP Config Path), (PHP Path).
Install Google Chrome Live Server extensions.
Open your PHP file in Visual Studio Code and click to run Live Server.
Copy the opened page address and paste it into "Live Server Address" in the Live Server Chrome extension, then click Apply.
Switch back to Visual Studio Code, right-click your PHP file and click "PHP Server: Reload server". It will open your PHP file in the browser; copy the IP and port (for example http://localhost:3000), paste it into "Actual Server Address" in the Live Server Chrome extension and click Apply.
Turn on Live Reload in the Live Server Chrome extension.
Now whenever you run your PHP file with "PHP Server: Reload server", it will reload automatically on each save.
Note: for a smoother experience with automatic reloading, enable Auto Save and set its delay to 400 ms.
A:
Put this meta tag in your code; it works for me: <meta http-equiv="refresh" content="10">
A:
This solution works for me, but I had to install a local copy of php-7.4.24 and configure the PHP Server extension to look at that. Then I kept getting mysqli and curl errors until I edited the php.ini and added the full paths to the extensions in my local install of PHP 7:
extension="C:\php-7.4.24\ext\php_mysqli.dll"
extension="C:\php-7.4.24\ext\php_curl.dll"
just using the following didn't work:
extension="php_mysqli.dll"
extension="php_curl.dll"
A:
I did everything from the first answer, but I found that all I had to do was:
Install the Live Server plugin for Firefox and VSCode
For Laravel I ran php artisan serve --host 192.168.0.104 --port 8001
Go to the VS Code bottom right -> Go Live; a page opens, which I close.
In the Firefox plugin I changed the addresses: "Actual server address" to http://192.168.0.104:8001 and "Live server address" to the page that doesn't work, which was http://127.0.0.1:5500.
That's it.
A:
There is an improved version of live-server for VS Code, called five-server.
This extension does support PHP.
You have to place a fiveserver.config.js in the root directory of your project;
this could be an example configuration:
module.exports = {
php: "/usr/bin/php", //php executable
root: 'www', //root directory of your project, where the liveserver looks
open: 'index.php', //entrypoint of your php project
injectBody: true //enable live reloading
}
Also be sure to add an HTML tag to your index.php, otherwise it won't work (404 Error).
A sample index.php which works for me:
//index.php
<!DOCTYPE html>
<html lang="en">
<body>
<?php
echo "The Force is strong with you";
?>
</body>
</html>
A:
After a lot of trial and error I got it working with these settings.
Click to see the configuration image
| Is it possible to use Live-server for PHP with autoreload on save? | I tried to use the Live-Server extension on VS Code for PHP, but it only opened the "root" of the "served" project folder and showed the index.php as a downloadable file link.
Then I read about the Live-Server Web Extension and installed it, but it still did not work.
(Yes, I did enable the web extension inside Live-Server config settings in VS Code).
I've also tried to use the PHP Server extension, which does good job for serving the project, instead of using Apache in XAMPP, but I have not found a way to reload when saving.
Is there even a way to autoreload PHP on PHP Server?
Does the Live-Server Web Extension require something else other than the Live-Server installed in VS Code and enabling the web extension in Live-Server config settings?
I've seen that it works for some people in gifs/videos, but I didn't manage to work it out.
| [
"First of all, I want to tell you that Live Server which is available in visual code market place is the solution to your problem. It works mainly with the static webpage like HTML but also works with dynamic webpages like PHP, NodeJs and ASP.NET in a tricky way. In the following example I will guide you to install a live server which works with both webpages (static & dynamic).\n\nInstall PHP Server and Live Server from VS Code market place.\nCreate a PHP file, example index.php and place it in any sub-directory(say, demo) under /var/www/html/, like /var/www/html/demo/\nInstall the live server extension in the chrome browser and edit like this. \nNow click on the \"Go Live\" button in the VS Code. \n5.Now open the index.php file which is placed under /var/www/html/demo/ in VS Code and right click and select \"PHP Server: Reload Server\" then \"PHP Server: Open file in browser\".\nIn the Browser just open the IP address\n\n\nhttp://localhost:3000/demo/index.php\n\nyou will see that it is working in a live server with a dynamic webpage like PHP. When you edit and save the index.php file with the running VS Code, it will automatically updated on that IP address.\n",
"I was having a similar issue and I think I have found a workaround. With php server and live server installed, go to the web extension for live server and check \"I don't want a proxy setup\". For the actual server address put in your php server address (for me the default was http://localhost:3000/) and for live server address type in http://127.0.0.1:5500 if you kept the live server default address and port. In my settings.json I had \"liveServer.settings.useWebExt\" set to true, but setting it to false didn't make a difference for me for some reason.\nI couldn't figure out how to get it to open the php server address rather than the live server address when pressing \"go live\". It still brings up the directory structure and I think the issue lies with live server not being in the working directory of the php server, if that makes sense. However, if you go to the php server address (localhost:3000/) the php pages worked for me and pressing ctrl+s to save updated the page correctly.\nI hope this helps!\n",
"For having auto reload in PHP files in Visual Studio Code:\n\nInstall Live Server extension.\nInstall PHP Server extension.\nConfig PHP Server: (PHP Config Path), (PHP Path).\nInstall Google Chrome Live Server extensions.\nOpen your PHP file in Visual Studio Code and 'Click to run Lie Server'.\nCopy the opened page address and past it in \"Live Server Address\" in live server chrome extension and click Apply.\nSwitch again to Visual Studio Code and right click on your PHP file and click on \"PHP Server: Reload server\", it will open your PHP file in browser and just copy IP and port (for example: http://localhost:3000) and paste it on \"Actual Server Address\" in live server of Google Chrome extension and click apply.\nTurn On live Reload in live server of chrome extension.\nNow whenever you run your PHP file with \"PHP Server: Reload server\", it will reload automatically on each saving.\n\nNote: For having good experience of automatically reload active Autosave and set 400ms for its delay.\n",
"put this meta tag to your code that work for me <meta http-equiv=\"refresh\" content=\"10\">\n\n",
"This solution works for me but i had to install a local copy of php-7.4.24, and configure the PHP_Server extention to look at that, then I kept getting mysqli and curl errors until I edited the php.ini and added the full paths to the extensions in my local install of php7\nextension=\"C:\\php-7.4.24\\ext\\php_mysqli.dll\"\nextension=\"C:\\php-7.4.24\\ext\\php_curl.dll\"\njust using the following didn't work:\nextension=\"php_mysqli.dll\"\nextension=\"php_curl.dll\"\n",
"I did all from the first answer, but what I found is all I had to do was to\nInstall the Live Server plugin for Firefox and VSCode\nFor Laravel I ran php artisan serve --host 192.168.0.104 --port 8001\nGo in VSCode bottom right -> Go Live, then a page opens, I close it\nIn Firefox plugin I changed the links, Actual server address as http://192.168.0.104:8001 and Live server address as those pages that don't work, which was http://127.0.0.1:5500\n\nThat's it.\n",
"There is a improved version of live-server for vs-code, which is called five-server.\nThis extension does support PHP.\nYou have to place a fiveserver.config.js into the root directory of your project,\nthis could be an example configuration:\nmodule.exports = {\n php: \"/usr/bin/php\", //php executable\n root: 'www', //root directory of your project, where the liveserver looks\n open: 'index.php', //entrypoint of your php project\n injectBody: true //enable live reloading\n}\n\nAlso be sure to add a HTML tag into your index.php, otherwise it won't work (404 Error).\nA sample index.php which works for me:\n //index.php\n<!DOCTYPE html>\n<html lang=\"en\">\n \n <body>\n <?php\n echo \"The Force is strong with you\";\n ?>\n </body>\n \n</html>\n\n",
"After a lot of trial and error I got it working with these settings.\nClick to see imagen configuration\n"
] | [
54,
10,
5,
1,
0,
0,
0,
0
] | [
"Download the chrome extension-\nYou'll also have to add the live server extension on VS code\nActual server address = http://localhost/(insert_folder_name_here)/\nLive server address = http://127.0.0.1:5500/(insert_php_file_name).php\nUse a sample html file to find the Live server address if this doesn't work for you, OR\nOR Watch this video to setup it up\n"
] | [
-3
] | [
"php",
"visual_studio_code",
"xampp"
] | stackoverflow_0060678203_php_visual_studio_code_xampp.txt |
Q:
git diff-tree shows no output
I have read that the following command allows you to see all changed files of the last commit:
git diff-tree --no-commit-id --diff-filter=d --name-only -r $(Build.SourceVersion)
Unfortunately I have no luck, the command does not show anything.
How is that possible? I am currently on a branch called swagger-fix, so maybe the command is not able to see the branch?
Thank you for your help.
A:
torek's comment is spot on:
Carnac the Magnificent says: You're using a CI system and you've forgotten to turn off shallow clones in the CI system. Turn off shallow clones (or set the depth to be at least 2).
So make sure to include such crucial information in your future questions :)
The solution: unshallow your CI clone or turn off shallow clones altogether.
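For example, since $(Build.SourceVersion) is an Azure DevOps variable, a hypothetical sketch of the pipeline fix (step layout assumed) would be to disable shallow fetch in the checkout step:
steps:
  - checkout: self
    fetchDepth: 0   # 0 = full history, so diff-tree can see the commit's parent
On an already-shallow clone you can instead run:
git fetch --unshallow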
| git diff-tree shows no output | I have read that the following command allows you to see all changed files of the last commit:
git diff-tree --no-commit-id --diff-filter=d --name-only -r $(Build.SourceVersion)
Unfortunately I have no luck, the command does not show anything.
How is that possible? I am currently on a branch called swagger-fix, so maybe the command is not able to see the branch?
Thank you for your help.
| [
"torek's comment is spot on:\n\nCarnac the Magnificent says: You're using a CI system and you've forgotten to turn off shallow clones in the CI system. Turn off shallow clones (or set the depth to be at least 2).\n\nSo make sure to include such crucial information in your future questions :)\nThe solution: unshallow your CI clone or turn of shallow clones altogether.\n"
] | [
0
] | [] | [] | [
"git",
"git_diff",
"git_diff_tree"
] | stackoverflow_0074516920_git_git_diff_git_diff_tree.txt |
Q:
How to change the icon color of MUI TextField type time?
Is there a CSS way to change the icon color of a MUI TextField of type time?
I'm able to change the time but not the icon
https://stackblitz.com/edit/react-puyjkf-cbnpun?file=demo.js
A:
Try this, it will work for you.
Demo.js:
<TextField
sx={{
'& input[type="time"]::-webkit-calendar-picker-indicator': {
filter:
'invert(78%) sepia(66%) saturate(6558%) hue-rotate(84deg) brightness(127%) contrast(116%)',
},
}}
type="time"
variant="outlined"
/>
| How to change the icon color of MUI TextField type time? | Is there a css way to change the color of of MUI TextField type time
I'm able to change the time but not the icon
https://stackblitz.com/edit/react-puyjkf-cbnpun?file=demo.js
| [
"Try this is will work for you.\nDemo.js:\n<TextField\n sx={{\n '& input[type=\"time\"]::-webkit-calendar-picker-indicator': {\n filter:\n 'invert(78%) sepia(66%) saturate(6558%) hue-rotate(84deg) brightness(127%) contrast(116%)',\n },\n }}\n type=\"time\"\n variant=\"outlined\"\n />\n\n"
] | [
1
] | [] | [] | [
"css",
"material_ui",
"reactjs"
] | stackoverflow_0074675857_css_material_ui_reactjs.txt |
Q:
Disabling Juniper Services
I am currently running a Juniper firewall on which I would like to disable some services that are not considered best practice. Are there any specific commands in Juniper's CLI to disable FTP and Telnet?
A:
On a Junos-based Juniper firewall, FTP and Telnet are enabled simply by being present under system services, so you disable them by removing those statements from the configuration:
delete system services ftp
delete system services telnet
and then apply the change with commit.
| Disabling Juniper Services | I currently am running a juniper firewall that I would like to disable some services that are not considered best practice. Does there happen to be any specific commands using Juniper's CLI to disable FTP and disable Telnet?
| [
"To disable FTP on a Juniper firewall using the CLI, you can use the following command:\nset system services ftp disable\n\nTo disable Telnet on a Juniper firewall using the CLI, you can use the following command:\nset system services telnet disable\n\n"
] | [
0
] | [] | [] | [
"devops",
"juniper",
"juniper_network_connect"
] | stackoverflow_0072840480_devops_juniper_juniper_network_connect.txt |
Q:
Comparing pointers to member functions
I want to store a pointer to member function in some kind of object. Later in the program I want to compare it to another one.
The requirement is that the type of the object that holds the first pointer must be concrete (I need to store them in one container).
I failed to apply type erasure here because template functions cannot be virtual, so I cannot provide a "compareWith" interface to be overridden.
There is a half-working solution I came up with, but the problem is that the types of the Fnc objects are different and cannot be stored in the same container. I do need to store them in the same container.
Simplified version is below:
template<class T>
class Fnc
{
public:
Fnc(T&& fnc) : m_fnc(std::forward<T>(fnc)) {}
template<class Y>
bool compareWith(const Fnc<Y>& other) {
return other.m_fnc == m_fnc;
}
private:
T m_fnc;
};
class MyClass
{
public:
void method1(int a);
void method2(int a);
void method3();
};
int main() {
    //This block obviously works because I didn't try to put Fnc objects into a container
Fnc fnc1(&MyClass::method1);
Fnc fnc2(&MyClass::method2);
printf("%d\n", fnc1.compareWith(Fnc(&MyClass::method1))); //prints "1"
printf("%d\n", fnc1.compareWith(Fnc(&MyClass::method3))); //prints "0"
printf("%d\n", fnc2.compareWith(Fnc(&MyClass::method2))); //prints "1"
//This block is not working because Fnc is not polymorphic(and cannot be - template functions can not be virtual)
    std::vector<Fnc> methods;
methods.push_back(Fnc(&MyClass::method1));
methods.push_back(Fnc(&MyClass::method2));
methods.push_back(Fnc(&MyClass::method3));
printf("%d\n", methods[0].compareWith(Fnc(&MyClass::method1))); //should be "1"
printf("%d\n", methods[1].compareWith(Fnc(&MyClass::method2))); //should be "1"
printf("%d\n", methods[2].compareWith(Fnc(&MyClass::method3))); //should be "1"
printf("%d\n", methods[0].compareWith(Fnc(&MyClass::method2))); //should be "0"
}
A:
With a hint given by @joergbrech I made up some kind of hash that easily distinguishes one pointer to member function from another. And it works with overloads too.
The solution is very simple, yet may not be very robust. But it works nicely for me. The benefit is that it works with C++11 and doesn't require any fancy stuff :)
Working example:
#include <cstdint>
#include <cstring>
#include <iostream>
using namespace std;

//WARNING: sizeof(pointer-to-member-function) may not always be <= 8
//Consider changing the hash to be an array!
//I write it as uint64_t for the sake of ease.
template<typename R, class Class, class... Args>
uint64_t getId(R(Class::*ptr)(Args...)) {
uint64_t id = 0;
memcpy(&id, &ptr, sizeof(ptr));
return id;
}
class MyClass
{
public:
void test(int a) {
}
void test(double a) {
}
void test2(int a) {
}
};
int main() {
//Now every pointer-to-member-function may be simply represented by a number
//and compared at any time.
//static_cast is needed for functions that are overloaded if you wonder.
    cout << getId(static_cast<void(MyClass::*)(int)>(&MyClass::test)) << " "
        << getId(static_cast<void(MyClass::*)(double)>(&MyClass::test)) << " "
        << getId(static_cast<void(MyClass::*)(double)>(&MyClass::test)) << " "
        << getId(&MyClass::test2);
//Output was: 4389464 4389304 4389304 4389616
// ^ != ^ == ^ != ^
return 0;
}
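A minimal usage sketch, reusing getId and MyClass from the snippet above, to show how the ids meet the original requirement of keeping everything in a single container (needs #include <vector>):
std::vector<uint64_t> stored;
stored.push_back(getId(&MyClass::test2));
stored.push_back(getId(static_cast<void(MyClass::*)(int)>(&MyClass::test)));

//Later the stored ids can be compared against any other pointer-to-member-function
bool sameAsTest2 = (stored[0] == getId(&MyClass::test2)); // true
bool mixedUp = (stored[1] == getId(&MyClass::test2)); // false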
| Comparing pointers to member functions | I want to store a pointer to member function in some kind of object. Later in the programm i want to compare it to another one.
Requirenment is that the type of object that holds first pointer must be concrete (i need to store them in one container)
I failed to apply Type-erasure here because template functions can not be virtual, so i can not provide an interface "compareWith" to be overridden.
There is a half-working solution i came up with, but the problem here is with the types of Fnc objects - those are different and can not be stored in the same container. I do need to store them in the same container.
Simplified version is below:
template<class T>
class Fnc
{
public:
Fnc(T&& fnc) : m_fnc(std::forward<T>(fnc)) {}
template<class Y>
bool compareWith(const Fnc<Y>& other) {
return other.m_fnc == m_fnc;
}
private:
T m_fnc;
};
class MyClass
{
public:
void method1(int a);
void method2(int a);
void method3();
}
int main() {
//This block obviously works because i didnt try to put Fnc-objects to container
Fnc fnc1(&MyClass::method1);
Fnc fnc2(&MyClass::method2);
printf("%d\n", fnc1.compareWith(Fnc(&MyClass::method1))); //prints "1"
printf("%d\n", fnc1.compareWith(Fnc(&MyClass::method3))); //prints "0"
printf("%d\n", fnc2.compareWith(Fnc(&MyClass::method2))); //prints "1"
//This block is not working because Fnc is not polymorphic(and cannot be - template functions can not be virtual)
std::vecotor<Fnc> methods;
methods.push_back(Fnc(&MyClass::method1));
methods.push_back(Fnc(&MyClass::method2));
methods.push_back(Fnc(&MyClass::method3));
printf("%d\n", methods[0].compareWith(Fnc(&MyClass::method1))); //should be "1"
printf("%d\n", methods[1].compareWith(Fnc(&MyClass::method2))); //should be "1"
printf("%d\n", methods[2].compareWith(Fnc(&MyClass::method3))); //should be "1"
printf("%d\n", methods[0].compareWith(Fnc(&MyClass::method2))); //should be "0"
}
| [
"With a hint given by @joergbrech i made up some kind of hash that easily distinguishes one pointer to member function from another. And it works with overloads too.\nThe solution is very simple, yet may not be very robust. But it works nicely for me. The benefit is that it works with C++11 and doesnt require any fancy stuff :)\nWorking example:\n//WARINING: sizeof(pointer-to-member-function) may not always be <= 8\n//Consider changing the hash to be an array!\n//I write it as uint64_t for the sake of ease.\ntemplate<typename R, class Class, class... Args>\nuint64_t getId(R(Class::*ptr)(Args...)) {\n uint64_t id = 0;\n memcpy(&id, &ptr, sizeof(ptr));\n return id;\n}\n\nclass MyClass\n{\npublic:\n void test(int a) {\n }\n\n void test(double a) {\n }\n\n void test2(int a) {\n }\n};\n\nint main() {\n //Now every pointer-to-member-function may be simply represented by a number\n //and compared at any time.\n //static_cast is needed for functions that are overloaded if you wonder.\n cout << getId(static_cast<void(MyClass::*)(int)>(&MyClass::test))\n << getId(static_cast<void(MyClass::*)(double)>(&MyClass::test))\n << getId(static_cast<void(MyClass::*)(double)>(&MyClass::test))\n << getId(&MyClass::test2);\n\n //Output was: 4389464 4389304 4389304 4389616\n // ^ != ^ == ^ != ^\n return 0;\n}\n\n"
] | [
0
] | [] | [] | [
"c++",
"function",
"member",
"pointers",
"templates"
] | stackoverflow_0074675240_c++_function_member_pointers_templates.txt |
Q:
Entities and POJOs must have a usable public constructor
I am encountering this problem yet I have done everything right.
"Entities and POJOs must have a usable public constructor. You can have an empty constructor or a constructor whose parameters match the fields (by name and type). - kotlin.Unit"
my Entity code is this
@Entity(tableName = "note_table")
data class Note (
@PrimaryKey(autoGenerate = true)
val id: Int,
val name: String,
val email: String
)
An update: the error was originating from my DAO class.
The code was as follows
@Query("SELECT * FROM note_table ORDER BY ID ASC")
suspend fun getAllNotes(): LiveData<List<Note>>
A:
As @DarShan mentioned, the suspend keyword in the DAO class shouldn't be there, specifically when the function returns a LiveData (Room already runs LiveData queries asynchronously).
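For reference, a minimal sketch of the corrected DAO (interface name assumed; the query is the one from the question, just without suspend):
import androidx.lifecycle.LiveData
import androidx.room.Dao
import androidx.room.Query

@Dao
interface NoteDao {
    // Returning LiveData is already asynchronous, so no suspend modifier here
    @Query("SELECT * FROM note_table ORDER BY ID ASC")
    fun getAllNotes(): LiveData<List<Note>>
}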
| Entities and POJOs must have a usable public constructor | I am encountering this problem yet I have done everything right.
"Entities and POJOs must have a usable public constructor. You can have an empty constructor or a constructor whose parameters match the fields (by name and type). - kotlin.Unit"
my Entity code is this
@Entity(tableName = "note_table")
data class Note (
@PrimaryKey(autoGenerate = true)
val id: Int,
val name: String,
val email: String
)
An Update. The error was originating from my Dao class
The code was as follows
@Query("SELECT * FROM note_table ORDER BY ID ASC")
suspend fun getAllNotes(): LiveData<List<Note>>
| [
"As @DarShan mentioned, the suspend keyword in the dao class shouldn't be there. Specifically if the function should return a liveData\n"
] | [
0
] | [] | [] | [
"android",
"android_room",
"kotlin"
] | stackoverflow_0074670121_android_android_room_kotlin.txt |
Q:
JS arrays: merging together following items that starts by the same char (refer to the next loop in for loop)
Basically I have an array with HTML tags in it; the problem is that I also have some instructions starting with an @ in this array. Here's what the array looks like:
store = ["@if str == 'hello'", "<div>", "<h1>hello world</h1>", "</div>", "@else", "<p>Hello</p>"]
I want to group every HTML tags between the instruction items into one array item, resulted like this:
store = ["@if str == 'hello'", "<div><h1>hello world</h1></div>", "@else", "<p>Hello</p>"]
So I ended up with this code:
const store = ["@if str == 'hello'", "<div>", "<h1>hello world</h1>", "</div>", "@else", "<p>Hello</p>"]
const merged = [];
for (let i = 0; i < store.length; i++) {
if(store[i].slice(0, 1) == "@"){
merged.push(store[i]);
} else if (store[i].slice(0, 1) == "<") {
while (store[i+1] == "<") {
str = store[i] + store[i+1];
merged.push(str)
}
}
}
console.log(merged);
But it's not working, I guess because of the "[i+1]" that I use on the store array.
A:
Have another variable store the accumulating element string, then push it when you hit another condition.
const store = ["@if str == 'hello'", "<div>", "<h1>hello world</h1>", "</div>", "@else", "<p>Hello</p>"]
const merged = [];
let current = [];
for (let i = 0; i < store.length; i++) {
if(store[i].slice(0, 1) == "@"){
if(current.length) merged.push(current.join(''))
current = []
merged.push(store[i]);
} else if (store[i].slice(0, 1) == "<") {
current.push(store[i])
}
}
if (current.length) merged.push(current.join('')) // flush the last accumulated group after the loop
console.log(merged);
| JS arrays: merging together following items that starts by the same char (refer to the next loop in for loop) | Basically I have an item with HTML tags in it, the problem is that I have some instructions starting by an @ in this array too. Here's what the array looks like:
store = ["@if str == 'hello'", "<div>", "<h1>hello world</h1>", "</div>", "@else", "<p>Hello</p>"]
I want to group every HTML tags between the instruction items into one array item, resulted like this:
store = ["@if str == 'hello'", "<div><h1>hello world</h1></div>", "@else", "<p>Hello</p>"]
So I ended up with this code:
const store = ["@if str == 'hello'", "<div>", "<h1>hello world</h1>", "</div>", "@else", "<p>Hello</p>"]
const merged = [];
for (let i = 0; i < store.length; i++) {
if(store[i].slice(0, 1) == "@"){
merged.push(store[i]);
} else if (store[i].slice(0, 1) == "<") {
while (store[i+1] == "<") {
str = store[i] + store[i+1];
merged.push(str)
}
}
}
console.log(merged);
But it's working, I guess because of the "[i+1]" that I use on the store array.
| [
"Have another variable store the accumulating element string, then push it when you it another condition.\n\n\nconst store = [\"@if str == 'hello'\", \"<div>\", \"<h1>hello world</h1>\", \"</div>\", \"@else\", \"<p>Hello</p>\"]\nconst merged = [];\nlet current = [];\n\nfor (let i = 0; i < store.length; i++) {\n if(store[i].slice(0, 1) == \"@\"){\n if(current.length) merged.push(current.join(''))\n current = []\n merged.push(store[i]);\n } else if (store[i].slice(0, 1) == \"<\") {\n current.push(store[i])\n }\n}\nconsole.log(merged);\n\n\n\n"
] | [
0
] | [] | [] | [
"arrays",
"for_loop",
"javascript",
"loops",
"merge"
] | stackoverflow_0074675741_arrays_for_loop_javascript_loops_merge.txt |
Q:
Is there a way to regenerate data 500 times?
library(MASS)
# set seed and create data vectors
#set.seed(98989) <- for replicating results of betas in 1-2 1-3
sample_size <- 200
sample_meanvector <- c(3, 4)
sample_covariance_matrix <- matrix(c(2, 1, 1, 2),
ncol = 2)
# create bivariate normal distribution
sample_distribution <- mvrnorm(n = sample_size,
mu = sample_meanvector,
Sigma = sample_covariance_matrix)
#Convert the datatype
df_sample_distribution <- as.data.frame(sample_distribution)
Is there a way to put this entire chunk of code in a loop and regenerate it 500 times? It would be even better if I could store the results somewhere.
A:
You might use replicate()
library(MASS)
out <- replicate(3, simplify = FALSE, {sample_size <- 200
sample_meanvector <- c(3, 4)
sample_covariance_matrix <- matrix(c(2, 1, 1, 2),
ncol = 2)
# create bivariate normal distribution
sample_distribution <- mvrnorm(n = sample_size,
mu = sample_meanvector,
Sigma = sample_covariance_matrix)
#Convert the datatype
df_sample_distribution <- as.data.frame(sample_distribution)
head(df_sample_distribution) # for shorter output
})
out
#> [[1]]
#> V1 V2
#> 1 3.195478 4.393699
#> 2 2.553590 5.065685
#> 3 2.822811 2.389559
#> 4 2.267116 4.076016
#> 5 1.659459 3.830608
#> 6 1.377554 4.009023
#>
#> [[2]]
#> V1 V2
#> 1 2.8850139 3.107203
#> 2 3.0313680 5.163229
#> 3 3.8649482 4.594017
#> 4 3.2747060 4.085805
#> 5 -0.1640264 3.628542
#> 6 3.6504855 4.747372
#>
#> [[3]]
#> V1 V2
#> 1 1.3230817 4.075396
#> 2 3.6049470 6.293968
#> 3 6.1211276 7.673592
#> 4 5.2955379 6.736665
#> 5 0.9032304 2.606501
#> 6 3.6034566 3.880563
Created on 2022-12-04 with reprex v2.0.2
A:
Yes, you can put the code in a loop and generate the sample data 500 times. Here is an example of how you can do that:
# set the number of iterations
num_iterations <- 500
# create an empty list to store the generated data
generated_data <- list()
# loop through the number of iterations
for (i in 1:num_iterations) {
# set the seed
set.seed(i)
# create the sample data using the mvrnorm function
sample_distribution <- mvrnorm(n = sample_size,
mu = sample_meanvector,
Sigma = sample_covariance_matrix)
# convert the data to a data frame
df_sample_distribution <- as.data.frame(sample_distribution)
# store the generated data in the list
generated_data[[i]] <- df_sample_distribution
}
# you can access the generated data using the list index, for example:
generated_data[[1]] # will return the first generated data
You can also store the generated data in a data frame by using the rbind function to combine the data frames in the list into a single data frame. Here is an example of how you can do that:
# create an empty data frame to store the generated data
generated_data_df <- data.frame()
# loop through the generated data list
for (i in 1:num_iterations) {
# bind the data frame at the current index to the generated data data frame
generated_data_df <- rbind(generated_data_df, generated_data[[i]])
}
# generated_data_df will now contain all the generated data
Alternatively, you can use the do.call and rbind functions to combine the data frames in the list into a single data frame in a single step, like this:
# create the data frame using the do.call and rbind functions
generated_data_df <- do.call(rbind, generated_data)
A:
Yes, you can use a for loop to generate the data multiple times. Here is an example:
# set seed and create data vectors
set.seed(98989)
sample_size <- 200
sample_meanvector <- c(3, 4)
sample_covariance_matrix <- matrix(c(2, 1, 1, 2),
ncol = 2)
# create a list to store the data frames
df_list <- list()
# loop to generate the data
for (i in 1:500) {
# create bivariate normal distribution
sample_distribution <- mvrnorm(n = sample_size,
mu = sample_meanvector,
Sigma = sample_covariance_matrix)
# Convert the data type
df_sample_distribution <- as.data.frame(sample_distribution)
# add the data frame to the list
df_list[[i]] <- df_sample_distribution
}
This code will generate 500 data frames, each containing the bivariate normal distribution data. The data frames will be stored in the df_list list. You can access each data frame by indexing the list, for example df_list[[1]] will give you the first data frame.
| Is the a way to regenerate data for 500 times? | library(MASS)
# set seed and create data vectors
#set.seed(98989) <- for replicating results of betas in 1-2 1-3
sample_size <- 200
sample_meanvector <- c(3, 4)
sample_covariance_matrix <- matrix(c(2, 1, 1, 2),
ncol = 2)
# create bivariate normal distribution
sample_distribution <- mvrnorm(n = sample_size,
mu = sample_meanvector,
Sigma = sample_covariance_matrix)
#Convert the datatype
df_sample_distribution <- as.data.frame(sample_distribution)
Is there a way to put this entire chunk of code in a loop and regenerate it for 500 times? Would be even better if i can store them somewhere.
| [
"You might use replicate()\nlibrary(MASS)\nout <- replicate(3, simplify = FALSE, {sample_size <- 200 \n sample_meanvector <- c(3, 4) \n sample_covariance_matrix <- matrix(c(2, 1, 1, 2),\n ncol = 2)\n \n # create bivariate normal distribution\n sample_distribution <- mvrnorm(n = sample_size,\n mu = sample_meanvector, \n Sigma = sample_covariance_matrix)\n #Convert the datatype\n df_sample_distribution <- as.data.frame(sample_distribution)\n\n head(df_sample_distribution) # for shorter output\n })\n\nout\n#> [[1]]\n#> V1 V2\n#> 1 3.195478 4.393699\n#> 2 2.553590 5.065685\n#> 3 2.822811 2.389559\n#> 4 2.267116 4.076016\n#> 5 1.659459 3.830608\n#> 6 1.377554 4.009023\n#> \n#> [[2]]\n#> V1 V2\n#> 1 2.8850139 3.107203\n#> 2 3.0313680 5.163229\n#> 3 3.8649482 4.594017\n#> 4 3.2747060 4.085805\n#> 5 -0.1640264 3.628542\n#> 6 3.6504855 4.747372\n#> \n#> [[3]]\n#> V1 V2\n#> 1 1.3230817 4.075396\n#> 2 3.6049470 6.293968\n#> 3 6.1211276 7.673592\n#> 4 5.2955379 6.736665\n#> 5 0.9032304 2.606501\n#> 6 3.6034566 3.880563\n\nCreated on 2022-12-04 with reprex v2.0.2\n",
"Yes, you can put the code in a loop and generate the sample data 500 times. Here is an example of how you can do that:\n# set the number of iterations\nnum_iterations <- 500\n\n# create an empty list to store the generated data\ngenerated_data <- list()\n\n# loop through the number of iterations\nfor (i in 1:num_iterations) {\n # set the seed\n set.seed(i)\n \n # create the sample data using the mvrnorm function\n sample_distribution <- mvrnorm(n = sample_size,\n mu = sample_meanvector, \n Sigma = sample_covariance_matrix)\n \n # convert the data to a data frame\n df_sample_distribution <- as.data.frame(sample_distribution)\n \n # store the generated data in the list\n generated_data[[i]] <- df_sample_distribution\n}\n\n# you can access the generated data using the list index, for example:\ngenerated_data[[1]] # will return the first generated data\n\nYou can also store the generated data in a data frame by using the rbind function to combine the data frames in the list into a single data frame. Here is an example of how you can do that:\n# create an empty data frame to store the generated data\ngenerated_data_df <- data.frame()\n\n# loop through the generated data list\nfor (i in 1:num_iterations) {\n # bind the data frame at the current index to the generated data data frame\n generated_data_df <- rbind(generated_data_df, generated_data[[i]])\n}\n\n# generated_data_df will now contain all the generated data\n\nAlternatively, you can use the do.call and rbind functions to combine the data frames in the list into a single data frame in a single step, like this:\n# create the data frame using the do.call and rbind functions\ngenerated_data_df <- do.call(rbind, generated_data)\n\n",
"Yes, you can use a for loop to generate the data multiple times. Here is an example:\n# set seed and create data vectors\nset.seed(98989)\nsample_size <- 200 \nsample_meanvector <- c(3, 4) \nsample_covariance_matrix <- matrix(c(2, 1, 1, 2),\n ncol = 2)\n\n# create a list to store the data frames\ndf_list <- list()\n\n# loop to generate the data\nfor (i in 1:500) {\n # create bivariate normal distribution\n sample_distribution <- mvrnorm(n = sample_size,\n mu = sample_meanvector, \n Sigma = sample_covariance_matrix)\n # Convert the data type\n df_sample_distribution <- as.data.frame(sample_distribution)\n # add the data frame to the list\n df_list[[i]] <- df_sample_distribution\n}\n\nThis code will generate 500 data frames, each containing the bivariate normal distribution data. The data frames will be stored in the df_list list. You can access each data frame by indexing the list, for example df_list[[1]] will give you the first data frame.\n"
] | [
2,
0,
0
] | [] | [] | [
"mass",
"normal_distribution",
"r"
] | stackoverflow_0074675944_mass_normal_distribution_r.txt |
Q:
Fetch data from HTML by net/html
I saw that net/html is 2-3 times faster than GoQuery, so I want to rewrite the parser module using it. I need to get the data underlined in the screenshot. Right now the result is "nil".
resp, err := http.Get(link)
if err != nil {
fmt.Println(err)
}
var result string
doc, err := html.Parse(resp.Body)
resp.Body.Close()
if err != nil {
fmt.Println(err)
}
if doc.Type == html.ElementNode && doc.Data == "profile-data__count-number" {
for _, a := range doc.Attr {
if a.Key == "em" {
result = a.Val
}
}
fmt.Println(result)
A:
Note that class is an HTML attribute, so looking for class="profile-data__count-number" in Node.Data will never result in a match. You should instead look for it in Node.Attr.
And if I'm not mistaken the tokenizer ought to be even less wasteful than the parser, so doing something like the following should give you a little bit better performance, I think:
func getCounts(respBody io.Reader) (counts []string, err error) {
z := html.NewTokenizer(respBody)
for {
if z.Next() == html.ErrorToken {
if err := z.Err(); err != io.EOF {
return nil, err
}
break
}
t := z.Token()
if t.Type == html.StartTagToken && t.Data == "div" {
for _, a := range t.Attr {
if a.Key == "class" && a.Val == "profile-data__count-number" {
if z.Next() == html.TextToken {
counts = append(counts, z.Token().Data)
break
}
}
}
}
}
return counts, nil
}
| Fetch data from HTML by net/html | I saw that the performance of net/html is 2-3 times faster than GoQuery and I want to rewrite the parser module on it. Need to get the data underlined in the screenshot. Now result is "nil"
resp, err := http.Get(link)
if err != nil {
fmt.Println(err)
}
var result string
doc, err := html.Parse(resp.Body)
resp.Body.Close()
if err != nil {
fmt.Println(err)
}
if doc.Type == html.ElementNode && doc.Data == "profile-data__count-number" {
for _, a := range doc.Attr {
if a.Key == "em" {
result = a.Val
}
}
fmt.Println(result)
| [
"Note that class is an HTML attribute, so looking for class=\"profile-data__count-number\" in Node.Data will never result in a match. You should instead look for it in Node.Attr.\nAnd if I'm not mistaken the tokenizer ought to be even less wasteful than the parser, so doing something like the following should give you a little bit better performance, I think:\nfunc getCounts(respBody io.Reader) (counts []string, err error) {\n z := html.NewTokenizer(respBody)\n for {\n if z.Next() == html.ErrorToken {\n if err := z.Err(); err != io.EOF {\n return nil, err\n } \n break\n }\n\n t := z.Token()\n if t.Type == html.StartTagToken && t.Data == \"div\" {\n for _, a := range t.Attr {\n if a.Key == \"class\" && a.Val == \"profile-data__count-number\" {\n if z.Next() == html.TextToken {\n counts = append(counts, z.Token().Data)\n break\n }\n }\n }\n }\n }\n return counts, nil\n}\n\n"
] | [
0
] | [] | [] | [
"go",
"html",
"parsing",
"web_scraping"
] | stackoverflow_0074675281_go_html_parsing_web_scraping.txt |
Q:
Eigen: vector of linear system solver
I'm working with the Eigen linear algebra library and need a vector of BiCGSTAB-solvers. Unfortunately, extending this vector is extremely difficult. The minimal (not) working example is
#include <Eigen/Eigen>
int main() {
std::vector< Eigen::BiCGSTAB< Eigen::SparseMatrix< double > > > tmp;
tmp.emplace_back();
}
and yields the error message
$ g++ -I/usr/include/eigen3 main.cpp
In file included from /usr/include/c++/12.2.0/vector:63,
from /usr/include/c++/12.2.0/functional:62,
from /usr/include/eigen3/Eigen/Core:85,
from /usr/include/eigen3/Eigen/Dense:1,
from /usr/include/eigen3/Eigen/Eigen:1,
from main.cpp:1:
/usr/include/c++/12.2.0/bits/stl_uninitialized.h: In instantiation of ‘constexpr bool std::__check_constructible() [with _ValueType = Eigen::BiCGSTAB<Eigen::SparseMatrix<double> >; _Tp = Eigen::BiCGSTAB<Eigen::SparseMatrix<double> >&&]’:
/usr/include/c++/12.2.0/bits/stl_uninitialized.h:182:4: required from ‘_ForwardIterator std::uninitialized_copy(_InputIterator, _InputIterator, _ForwardIterator) [with _InputIterator = move_iterator<Eigen::BiCGSTAB<Eigen::SparseMatrix<double> >*>; _ForwardIterator = Eigen::BiCGSTAB<Eigen::SparseMatrix<double> >*]’
/usr/include/c++/12.2.0/bits/stl_uninitialized.h:372:37: required from ‘_ForwardIterator std::__uninitialized_copy_a(_InputIterator, _InputIterator, _ForwardIterator, allocator<_Tp>&) [with _InputIterator = move_iterator<Eigen::BiCGSTAB<Eigen::SparseMatrix<double> >*>; _ForwardIterator = Eigen::BiCGSTAB<Eigen::SparseMatrix<double> >*; _Tp = Eigen::BiCGSTAB<Eigen::SparseMatrix<double> >]’
/usr/include/c++/12.2.0/bits/stl_uninitialized.h:397:2: required from ‘_ForwardIterator std::__uninitialized_move_if_noexcept_a(_InputIterator, _InputIterator, _ForwardIterator, _Allocator&) [with _InputIterator = Eigen::BiCGSTAB<Eigen::SparseMatrix<double> >*; _ForwardIterator = Eigen::BiCGSTAB<Eigen::SparseMatrix<double> >*; _Allocator = allocator<Eigen::BiCGSTAB<Eigen::SparseMatrix<double> > >]’
/usr/include/c++/12.2.0/bits/vector.tcc:487:3: required from ‘void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) [with _Args = {}; _Tp = Eigen::BiCGSTAB<Eigen::SparseMatrix<double> >; _Alloc = std::allocator<Eigen::BiCGSTAB<Eigen::SparseMatrix<double> > >; iterator = std::vector<Eigen::BiCGSTAB<Eigen::SparseMatrix<double> > >::iterator]’
/usr/include/c++/12.2.0/bits/vector.tcc:123:21: required from ‘std::vector<_Tp, _Alloc>::reference std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {}; _Tp = Eigen::BiCGSTAB<Eigen::SparseMatrix<double> >; _Alloc = std::allocator<Eigen::BiCGSTAB<Eigen::SparseMatrix<double> > >; reference = Eigen::BiCGSTAB<Eigen::SparseMatrix<double> >&]’
main.cpp:5:21: required from here
/usr/include/c++/12.2.0/bits/stl_uninitialized.h:90:56: error: static assertion failed: result type must be constructible from input type
90 | static_assert(is_constructible<_ValueType, _Tp>::value,
| ^~~~~
/usr/include/c++/12.2.0/bits/stl_uninitialized.h:90:56: note: ‘std::integral_constant<bool, false>::value’ evaluates to false
Trying to std::move is worse, i.e.
#include <Eigen/Eigen>
#include <utility>
int main() {
std::vector< Eigen::BiCGSTAB< Eigen::SparseMatrix< double > > > tmp;
Eigen::BiCGSTAB< Eigen::SparseMatrix< double > > solver;
tmp.push_back( std::move( solver ) );
}
leads to the error message
g++ -I/usr/include/eigen3 main.cpp
In file included from /usr/include/c++/12.2.0/x86_64-pc-linux-gnu/bits/c++allocator.h:33,
from /usr/include/c++/12.2.0/bits/allocator.h:46,
from /usr/include/c++/12.2.0/string:41,
from /usr/include/c++/12.2.0/bits/locale_classes.h:40,
from /usr/include/c++/12.2.0/bits/ios_base.h:41,
from /usr/include/c++/12.2.0/ios:42,
from /usr/include/c++/12.2.0/istream:38,
from /usr/include/c++/12.2.0/sstream:38,
from /usr/include/c++/12.2.0/complex:45,
from /usr/include/eigen3/Eigen/Core:50,
from /usr/include/eigen3/Eigen/Dense:1,
from /usr/include/eigen3/Eigen/Eigen:1,
from main.cpp:1:
/usr/include/c++/12.2.0/bits/new_allocator.h: In instantiation of ‘void std::__new_allocator<_Tp>::construct(_Up*, _Args&& ...) [with _Up = Eigen::BiCGSTAB<Eigen::SparseMatrix<double> >; _Args = {Eigen::BiCGSTAB<Eigen::SparseMatrix<double, 0, int>, Eigen::DiagonalPreconditioner<double> >}; _Tp = Eigen::BiCGSTAB<Eigen::SparseMatrix<double> >]’:
/usr/include/c++/12.2.0/bits/alloc_traits.h:516:17: required from ‘static void std::allocator_traits<std::allocator<_CharT> >::construct(allocator_type&, _Up*, _Args&& ...) [with _Up = Eigen::BiCGSTAB<Eigen::SparseMatrix<double> >; _Args = {Eigen::BiCGSTAB<Eigen::SparseMatrix<double, 0, int>, Eigen::DiagonalPreconditioner<double> >}; _Tp = Eigen::BiCGSTAB<Eigen::SparseMatrix<double> >; allocator_type = std::allocator<Eigen::BiCGSTAB<Eigen::SparseMatrix<double> > >]’
/usr/include/c++/12.2.0/bits/vector.tcc:117:30: required from ‘std::vector<_Tp, _Alloc>::reference std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {Eigen::BiCGSTAB<Eigen::SparseMatrix<double, 0, int>, Eigen::DiagonalPreconditioner<double> >}; _Tp = Eigen::BiCGSTAB<Eigen::SparseMatrix<double> >; _Alloc = std::allocator<Eigen::BiCGSTAB<Eigen::SparseMatrix<double> > >; reference = Eigen::BiCGSTAB<Eigen::SparseMatrix<double> >&]’
/usr/include/c++/12.2.0/bits/stl_vector.h:1294:21: required from ‘void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = Eigen::BiCGSTAB<Eigen::SparseMatrix<double> >; _Alloc = std::allocator<Eigen::BiCGSTAB<Eigen::SparseMatrix<double> > >; value_type = Eigen::BiCGSTAB<Eigen::SparseMatrix<double> >]’
main.cpp:9:18: required from here
/usr/include/c++/12.2.0/bits/new_allocator.h:175:11: error: use of deleted function ‘Eigen::BiCGSTAB<Eigen::SparseMatrix<double> >::BiCGSTAB(const Eigen::BiCGSTAB<Eigen::SparseMatrix<double> >&)’
175 | { ::new((void *)__p) _Up(std::forward<_Args>(__args)...); }
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In file included from /usr/include/eigen3/Eigen/IterativeLinearSolvers:42,
from /usr/include/eigen3/Eigen/Sparse:31,
from /usr/include/eigen3/Eigen/Eigen:2:
/usr/include/eigen3/Eigen/src/IterativeLinearSolvers/BiCGSTAB.h:158:7: note: ‘Eigen::BiCGSTAB<Eigen::SparseMatrix<double> >::BiCGSTAB(const Eigen::BiCGSTAB<Eigen::SparseMatrix<double> >&)’ is implicitly deleted because the default definition would be ill-formed:
158 | class BiCGSTAB : public IterativeSolverBase<BiCGSTAB<_MatrixType,_Preconditioner> >
| ^~~~~~~~
/usr/include/eigen3/Eigen/src/IterativeLinearSolvers/BiCGSTAB.h:158:7: error: use of deleted function ‘Eigen::IterativeSolverBase<Eigen::BiCGSTAB<Eigen::SparseMatrix<double> > >::IterativeSolverBase(const Eigen::IterativeSolverBase<Eigen::BiCGSTAB<Eigen::SparseMatrix<double> > >&)’
In file included from /usr/include/eigen3/Eigen/IterativeLinearSolvers:38:
/usr/include/eigen3/Eigen/src/IterativeLinearSolvers/IterativeSolverBase.h:143:7: note: ‘Eigen::IterativeSolverBase<Eigen::BiCGSTAB<Eigen::SparseMatrix<double> > >::IterativeSolverBase(const Eigen::IterativeSolverBase<Eigen::BiCGSTAB<Eigen::SparseMatrix<double> > >&)’ is implicitly deleted because the default definition would be ill-formed:
143 | class IterativeSolverBase : public SparseSolverBase<Derived>
| ^~~~~~~~~~~~~~~~~~~
/usr/include/eigen3/Eigen/src/IterativeLinearSolvers/IterativeSolverBase.h:143:7: error: use of deleted function ‘Eigen::SparseSolverBase<Eigen::BiCGSTAB<Eigen::SparseMatrix<double> > >::SparseSolverBase(const Eigen::SparseSolverBase<Eigen::BiCGSTAB<Eigen::SparseMatrix<double> > >&)’
In file included from /usr/include/eigen3/Eigen/SparseCore:64,
from /usr/include/eigen3/Eigen/Sparse:26:
/usr/include/eigen3/Eigen/src/SparseCore/SparseSolverBase.h:67:7: note: ‘Eigen::SparseSolverBase<Eigen::BiCGSTAB<Eigen::SparseMatrix<double> > >::SparseSolverBase(const Eigen::SparseSolverBase<Eigen::BiCGSTAB<Eigen::SparseMatrix<double> > >&)’ is implicitly deleted because the default definition would be ill-formed:
67 | class SparseSolverBase : internal::noncopyable
| ^~~~~~~~~~~~~~~~
/usr/include/eigen3/Eigen/src/SparseCore/SparseSolverBase.h:67:7: error: ‘Eigen::internal::noncopyable::noncopyable(const Eigen::internal::noncopyable&)’ is private within this context
In file included from /usr/include/eigen3/Eigen/Core:162:
/usr/include/eigen3/Eigen/src/Core/util/Meta.h:424:21: note: declared private here
424 | EIGEN_DEVICE_FUNC noncopyable(const noncopyable&);
| ^~~~~~~~~~~
In file included from /usr/include/c++/12.2.0/vector:63,
from /usr/include/c++/12.2.0/functional:62,
from /usr/include/eigen3/Eigen/Core:85:
/usr/include/c++/12.2.0/bits/stl_uninitialized.h: In instantiation of ‘constexpr bool std::__check_constructible() [with _ValueType = Eigen::BiCGSTAB<Eigen::SparseMatrix<double> >; _Tp = Eigen::BiCGSTAB<Eigen::SparseMatrix<double> >&&]’:
/usr/include/c++/12.2.0/bits/stl_uninitialized.h:182:4: required from ‘_ForwardIterator std::uninitialized_copy(_InputIterator, _InputIterator, _ForwardIterator) [with _InputIterator = move_iterator<Eigen::BiCGSTAB<Eigen::SparseMatrix<double> >*>; _ForwardIterator = Eigen::BiCGSTAB<Eigen::SparseMatrix<double> >*]’
/usr/include/c++/12.2.0/bits/stl_uninitialized.h:372:37: required from ‘_ForwardIterator std::__uninitialized_copy_a(_InputIterator, _InputIterator, _ForwardIterator, allocator<_Tp>&) [with _InputIterator = move_iterator<Eigen::BiCGSTAB<Eigen::SparseMatrix<double> >*>; _ForwardIterator = Eigen::BiCGSTAB<Eigen::SparseMatrix<double> >*; _Tp = Eigen::BiCGSTAB<Eigen::SparseMatrix<double> >]’
/usr/include/c++/12.2.0/bits/stl_uninitialized.h:397:2: required from ‘_ForwardIterator std::__uninitialized_move_if_noexcept_a(_InputIterator, _InputIterator, _ForwardIterator, _Allocator&) [with _InputIterator = Eigen::BiCGSTAB<Eigen::SparseMatrix<double> >*; _ForwardIterator = Eigen::BiCGSTAB<Eigen::SparseMatrix<double> >*; _Allocator = allocator<Eigen::BiCGSTAB<Eigen::SparseMatrix<double> > >]’
/usr/include/c++/12.2.0/bits/vector.tcc:487:3: required from ‘void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) [with _Args = {Eigen::BiCGSTAB<Eigen::SparseMatrix<double, 0, int>, Eigen::DiagonalPreconditioner<double> >}; _Tp = Eigen::BiCGSTAB<Eigen::SparseMatrix<double> >; _Alloc = std::allocator<Eigen::BiCGSTAB<Eigen::SparseMatrix<double> > >; iterator = std::vector<Eigen::BiCGSTAB<Eigen::SparseMatrix<double> > >::iterator]’
/usr/include/c++/12.2.0/bits/vector.tcc:123:21: required from ‘std::vector<_Tp, _Alloc>::reference std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {Eigen::BiCGSTAB<Eigen::SparseMatrix<double, 0, int>, Eigen::DiagonalPreconditioner<double> >}; _Tp = Eigen::BiCGSTAB<Eigen::SparseMatrix<double> >; _Alloc = std::allocator<Eigen::BiCGSTAB<Eigen::SparseMatrix<double> > >; reference = Eigen::BiCGSTAB<Eigen::SparseMatrix<double> >&]’
/usr/include/c++/12.2.0/bits/stl_vector.h:1294:21: required from ‘void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = Eigen::BiCGSTAB<Eigen::SparseMatrix<double> >; _Alloc = std::allocator<Eigen::BiCGSTAB<Eigen::SparseMatrix<double> > >; value_type = Eigen::BiCGSTAB<Eigen::SparseMatrix<double> >]’
main.cpp:9:18: required from here
/usr/include/c++/12.2.0/bits/stl_uninitialized.h:90:56: error: static assertion failed: result type must be constructible from input type
90 | static_assert(is_constructible<_ValueType, _Tp>::value,
| ^~~~~
/usr/include/c++/12.2.0/bits/stl_uninitialized.h:90:56: note: ‘std::integral_constant<bool, false>::value’ evaluates to false
I'm using Eigen 3.4 and g++ version 12.2.
Any ideas how to fix this?
A:
Turning my comments into a proper answer:
After looking at the code, I found that BiCGSTAB like all solvers inherits from a base class designed to prevent copying, and by extension moving, too: class SparseSolverBase : internal::noncopyable
The exact reasons for this design choice I cannot tell. If I had to guess, I'd say some solvers probably use self-referential attributes (holding pointers to other members) which would break especially with fixed-size matrices. Or using Eigen::Map may cause issues on copy, especially copy-assignment.
std::vector only works with moveable types as it needs to move when it reallocates. Even when calling reserve() beforehand, the code still needs to compile, even if it is never executed.
Three workarounds come to mind (a short sketch of the first two follows this list):
Use std::deque. It provides almost all of the methods that vector has, but its implementation means that as long as you only call emplace_back or emplace_front and not e.g. insert, it does not need moveable types. The downside is that it is a bit slower on all individual accesses
Use std::vector<std::unique_ptr<Solver>>. Less efficient than the deque but now you can also insert, reshuffle, etc.
Use std::unique_ptr<Solver[]> and use the good old new Solver[count] allocation. Starting with C++14, you can use std::make_unique<Solver[]>(count). This has the least overhead, even less than vector but the interface isn't as nice (you can use the [index] operator but the pointer doesn't even know the array size) and the number is fixed after allocation
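A minimal sketch of the first two workarounds, assuming the same BiCGSTAB/SparseMatrix types as in the question:
#include <deque>
#include <memory>
#include <vector>
#include <Eigen/Eigen>

using Solver = Eigen::BiCGSTAB<Eigen::SparseMatrix<double>>;

int main() {
    // std::deque never relocates existing elements, so emplace_back compiles for non-movable types
    std::deque<Solver> solvers;
    solvers.emplace_back();

    // std::vector of unique_ptr: only the pointers are moved on reallocation
    std::vector<std::unique_ptr<Solver>> solver_ptrs;
    solver_ptrs.push_back(std::make_unique<Solver>());  // make_unique needs C++14
    return 0;
}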
| Eigen: vector of linear system solver | I'm working with the Eigen linear algebra library and need a vector of BiCGSTAB-solvers. Unfortunately, extending this vector is extremely difficult. The minimal (not) working example is
#include <Eigen/Eigen>
int main() {
std::vector< Eigen::BiCGSTAB< Eigen::SparseMatrix< double > > > tmp;
tmp.emplace_back();
}
and yields the error message
$ g++ -I/usr/include/eigen3 main.cpp
In file included from /usr/include/c++/12.2.0/vector:63,
from /usr/include/c++/12.2.0/functional:62,
from /usr/include/eigen3/Eigen/Core:85,
from /usr/include/eigen3/Eigen/Dense:1,
from /usr/include/eigen3/Eigen/Eigen:1,
from main.cpp:1:
/usr/include/c++/12.2.0/bits/stl_uninitialized.h: In instantiation of ‘constexpr bool std::__check_constructible() [with _ValueType = Eigen::BiCGSTAB<Eigen::SparseMatrix<double> >; _Tp = Eigen::BiCGSTAB<Eigen::SparseMatrix<double> >&&]’:
/usr/include/c++/12.2.0/bits/stl_uninitialized.h:182:4: required from ‘_ForwardIterator std::uninitialized_copy(_InputIterator, _InputIterator, _ForwardIterator) [with _InputIterator = move_iterator<Eigen::BiCGSTAB<Eigen::SparseMatrix<double> >*>; _ForwardIterator = Eigen::BiCGSTAB<Eigen::SparseMatrix<double> >*]’
/usr/include/c++/12.2.0/bits/stl_uninitialized.h:372:37: required from ‘_ForwardIterator std::__uninitialized_copy_a(_InputIterator, _InputIterator, _ForwardIterator, allocator<_Tp>&) [with _InputIterator = move_iterator<Eigen::BiCGSTAB<Eigen::SparseMatrix<double> >*>; _ForwardIterator = Eigen::BiCGSTAB<Eigen::SparseMatrix<double> >*; _Tp = Eigen::BiCGSTAB<Eigen::SparseMatrix<double> >]’
/usr/include/c++/12.2.0/bits/stl_uninitialized.h:397:2: required from ‘_ForwardIterator std::__uninitialized_move_if_noexcept_a(_InputIterator, _InputIterator, _ForwardIterator, _Allocator&) [with _InputIterator = Eigen::BiCGSTAB<Eigen::SparseMatrix<double> >*; _ForwardIterator = Eigen::BiCGSTAB<Eigen::SparseMatrix<double> >*; _Allocator = allocator<Eigen::BiCGSTAB<Eigen::SparseMatrix<double> > >]’
/usr/include/c++/12.2.0/bits/vector.tcc:487:3: required from ‘void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) [with _Args = {}; _Tp = Eigen::BiCGSTAB<Eigen::SparseMatrix<double> >; _Alloc = std::allocator<Eigen::BiCGSTAB<Eigen::SparseMatrix<double> > >; iterator = std::vector<Eigen::BiCGSTAB<Eigen::SparseMatrix<double> > >::iterator]’
/usr/include/c++/12.2.0/bits/vector.tcc:123:21: required from ‘std::vector<_Tp, _Alloc>::reference std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {}; _Tp = Eigen::BiCGSTAB<Eigen::SparseMatrix<double> >; _Alloc = std::allocator<Eigen::BiCGSTAB<Eigen::SparseMatrix<double> > >; reference = Eigen::BiCGSTAB<Eigen::SparseMatrix<double> >&]’
main.cpp:5:21: required from here
/usr/include/c++/12.2.0/bits/stl_uninitialized.h:90:56: error: static assertion failed: result type must be constructible from input type
90 | static_assert(is_constructible<_ValueType, _Tp>::value,
| ^~~~~
/usr/include/c++/12.2.0/bits/stl_uninitialized.h:90:56: note: ‘std::integral_constant<bool, false>::value’ evaluates to false
Trying to std::move is worse, i.e.
#include <Eigen/Eigen>
#include <utility>
int main() {
std::vector< Eigen::BiCGSTAB< Eigen::SparseMatrix< double > > > tmp;
Eigen::BiCGSTAB< Eigen::SparseMatrix< double > > solver;
tmp.push_back( std::move( solver ) );
}
leads to the error message
g++ -I/usr/include/eigen3 main.cpp
In file included from /usr/include/c++/12.2.0/x86_64-pc-linux-gnu/bits/c++allocator.h:33,
from /usr/include/c++/12.2.0/bits/allocator.h:46,
from /usr/include/c++/12.2.0/string:41,
from /usr/include/c++/12.2.0/bits/locale_classes.h:40,
from /usr/include/c++/12.2.0/bits/ios_base.h:41,
from /usr/include/c++/12.2.0/ios:42,
from /usr/include/c++/12.2.0/istream:38,
from /usr/include/c++/12.2.0/sstream:38,
from /usr/include/c++/12.2.0/complex:45,
from /usr/include/eigen3/Eigen/Core:50,
from /usr/include/eigen3/Eigen/Dense:1,
from /usr/include/eigen3/Eigen/Eigen:1,
from main.cpp:1:
/usr/include/c++/12.2.0/bits/new_allocator.h: In instantiation of ‘void std::__new_allocator<_Tp>::construct(_Up*, _Args&& ...) [with _Up = Eigen::BiCGSTAB<Eigen::SparseMatrix<double> >; _Args = {Eigen::BiCGSTAB<Eigen::SparseMatrix<double, 0, int>, Eigen::DiagonalPreconditioner<double> >}; _Tp = Eigen::BiCGSTAB<Eigen::SparseMatrix<double> >]’:
/usr/include/c++/12.2.0/bits/alloc_traits.h:516:17: required from ‘static void std::allocator_traits<std::allocator<_CharT> >::construct(allocator_type&, _Up*, _Args&& ...) [with _Up = Eigen::BiCGSTAB<Eigen::SparseMatrix<double> >; _Args = {Eigen::BiCGSTAB<Eigen::SparseMatrix<double, 0, int>, Eigen::DiagonalPreconditioner<double> >}; _Tp = Eigen::BiCGSTAB<Eigen::SparseMatrix<double> >; allocator_type = std::allocator<Eigen::BiCGSTAB<Eigen::SparseMatrix<double> > >]’
/usr/include/c++/12.2.0/bits/vector.tcc:117:30: required from ‘std::vector<_Tp, _Alloc>::reference std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {Eigen::BiCGSTAB<Eigen::SparseMatrix<double, 0, int>, Eigen::DiagonalPreconditioner<double> >}; _Tp = Eigen::BiCGSTAB<Eigen::SparseMatrix<double> >; _Alloc = std::allocator<Eigen::BiCGSTAB<Eigen::SparseMatrix<double> > >; reference = Eigen::BiCGSTAB<Eigen::SparseMatrix<double> >&]’
/usr/include/c++/12.2.0/bits/stl_vector.h:1294:21: required from ‘void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = Eigen::BiCGSTAB<Eigen::SparseMatrix<double> >; _Alloc = std::allocator<Eigen::BiCGSTAB<Eigen::SparseMatrix<double> > >; value_type = Eigen::BiCGSTAB<Eigen::SparseMatrix<double> >]’
main.cpp:9:18: required from here
/usr/include/c++/12.2.0/bits/new_allocator.h:175:11: error: use of deleted function ‘Eigen::BiCGSTAB<Eigen::SparseMatrix<double> >::BiCGSTAB(const Eigen::BiCGSTAB<Eigen::SparseMatrix<double> >&)’
175 | { ::new((void *)__p) _Up(std::forward<_Args>(__args)...); }
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In file included from /usr/include/eigen3/Eigen/IterativeLinearSolvers:42,
from /usr/include/eigen3/Eigen/Sparse:31,
from /usr/include/eigen3/Eigen/Eigen:2:
/usr/include/eigen3/Eigen/src/IterativeLinearSolvers/BiCGSTAB.h:158:7: note: ‘Eigen::BiCGSTAB<Eigen::SparseMatrix<double> >::BiCGSTAB(const Eigen::BiCGSTAB<Eigen::SparseMatrix<double> >&)’ is implicitly deleted because the default definition would be ill-formed:
158 | class BiCGSTAB : public IterativeSolverBase<BiCGSTAB<_MatrixType,_Preconditioner> >
| ^~~~~~~~
/usr/include/eigen3/Eigen/src/IterativeLinearSolvers/BiCGSTAB.h:158:7: error: use of deleted function ‘Eigen::IterativeSolverBase<Eigen::BiCGSTAB<Eigen::SparseMatrix<double> > >::IterativeSolverBase(const Eigen::IterativeSolverBase<Eigen::BiCGSTAB<Eigen::SparseMatrix<double> > >&)’
In file included from /usr/include/eigen3/Eigen/IterativeLinearSolvers:38:
/usr/include/eigen3/Eigen/src/IterativeLinearSolvers/IterativeSolverBase.h:143:7: note: ‘Eigen::IterativeSolverBase<Eigen::BiCGSTAB<Eigen::SparseMatrix<double> > >::IterativeSolverBase(const Eigen::IterativeSolverBase<Eigen::BiCGSTAB<Eigen::SparseMatrix<double> > >&)’ is implicitly deleted because the default definition would be ill-formed:
143 | class IterativeSolverBase : public SparseSolverBase<Derived>
| ^~~~~~~~~~~~~~~~~~~
/usr/include/eigen3/Eigen/src/IterativeLinearSolvers/IterativeSolverBase.h:143:7: error: use of deleted function ‘Eigen::SparseSolverBase<Eigen::BiCGSTAB<Eigen::SparseMatrix<double> > >::SparseSolverBase(const Eigen::SparseSolverBase<Eigen::BiCGSTAB<Eigen::SparseMatrix<double> > >&)’
In file included from /usr/include/eigen3/Eigen/SparseCore:64,
from /usr/include/eigen3/Eigen/Sparse:26:
/usr/include/eigen3/Eigen/src/SparseCore/SparseSolverBase.h:67:7: note: ‘Eigen::SparseSolverBase<Eigen::BiCGSTAB<Eigen::SparseMatrix<double> > >::SparseSolverBase(const Eigen::SparseSolverBase<Eigen::BiCGSTAB<Eigen::SparseMatrix<double> > >&)’ is implicitly deleted because the default definition would be ill-formed:
67 | class SparseSolverBase : internal::noncopyable
| ^~~~~~~~~~~~~~~~
/usr/include/eigen3/Eigen/src/SparseCore/SparseSolverBase.h:67:7: error: ‘Eigen::internal::noncopyable::noncopyable(const Eigen::internal::noncopyable&)’ is private within this context
In file included from /usr/include/eigen3/Eigen/Core:162:
/usr/include/eigen3/Eigen/src/Core/util/Meta.h:424:21: note: declared private here
424 | EIGEN_DEVICE_FUNC noncopyable(const noncopyable&);
| ^~~~~~~~~~~
In file included from /usr/include/c++/12.2.0/vector:63,
from /usr/include/c++/12.2.0/functional:62,
from /usr/include/eigen3/Eigen/Core:85:
/usr/include/c++/12.2.0/bits/stl_uninitialized.h: In instantiation of ‘constexpr bool std::__check_constructible() [with _ValueType = Eigen::BiCGSTAB<Eigen::SparseMatrix<double> >; _Tp = Eigen::BiCGSTAB<Eigen::SparseMatrix<double> >&&]’:
/usr/include/c++/12.2.0/bits/stl_uninitialized.h:182:4: required from ‘_ForwardIterator std::uninitialized_copy(_InputIterator, _InputIterator, _ForwardIterator) [with _InputIterator = move_iterator<Eigen::BiCGSTAB<Eigen::SparseMatrix<double> >*>; _ForwardIterator = Eigen::BiCGSTAB<Eigen::SparseMatrix<double> >*]’
/usr/include/c++/12.2.0/bits/stl_uninitialized.h:372:37: required from ‘_ForwardIterator std::__uninitialized_copy_a(_InputIterator, _InputIterator, _ForwardIterator, allocator<_Tp>&) [with _InputIterator = move_iterator<Eigen::BiCGSTAB<Eigen::SparseMatrix<double> >*>; _ForwardIterator = Eigen::BiCGSTAB<Eigen::SparseMatrix<double> >*; _Tp = Eigen::BiCGSTAB<Eigen::SparseMatrix<double> >]’
/usr/include/c++/12.2.0/bits/stl_uninitialized.h:397:2: required from ‘_ForwardIterator std::__uninitialized_move_if_noexcept_a(_InputIterator, _InputIterator, _ForwardIterator, _Allocator&) [with _InputIterator = Eigen::BiCGSTAB<Eigen::SparseMatrix<double> >*; _ForwardIterator = Eigen::BiCGSTAB<Eigen::SparseMatrix<double> >*; _Allocator = allocator<Eigen::BiCGSTAB<Eigen::SparseMatrix<double> > >]’
/usr/include/c++/12.2.0/bits/vector.tcc:487:3: required from ‘void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) [with _Args = {Eigen::BiCGSTAB<Eigen::SparseMatrix<double, 0, int>, Eigen::DiagonalPreconditioner<double> >}; _Tp = Eigen::BiCGSTAB<Eigen::SparseMatrix<double> >; _Alloc = std::allocator<Eigen::BiCGSTAB<Eigen::SparseMatrix<double> > >; iterator = std::vector<Eigen::BiCGSTAB<Eigen::SparseMatrix<double> > >::iterator]’
/usr/include/c++/12.2.0/bits/vector.tcc:123:21: required from ‘std::vector<_Tp, _Alloc>::reference std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {Eigen::BiCGSTAB<Eigen::SparseMatrix<double, 0, int>, Eigen::DiagonalPreconditioner<double> >}; _Tp = Eigen::BiCGSTAB<Eigen::SparseMatrix<double> >; _Alloc = std::allocator<Eigen::BiCGSTAB<Eigen::SparseMatrix<double> > >; reference = Eigen::BiCGSTAB<Eigen::SparseMatrix<double> >&]’
/usr/include/c++/12.2.0/bits/stl_vector.h:1294:21: required from ‘void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = Eigen::BiCGSTAB<Eigen::SparseMatrix<double> >; _Alloc = std::allocator<Eigen::BiCGSTAB<Eigen::SparseMatrix<double> > >; value_type = Eigen::BiCGSTAB<Eigen::SparseMatrix<double> >]’
main.cpp:9:18: required from here
/usr/include/c++/12.2.0/bits/stl_uninitialized.h:90:56: error: static assertion failed: result type must be constructible from input type
90 | static_assert(is_constructible<_ValueType, _Tp>::value,
| ^~~~~
/usr/include/c++/12.2.0/bits/stl_uninitialized.h:90:56: note: ‘std::integral_constant<bool, false>::value’ evaluates to false
I'm using Eigen 3.4 and g++ version 12.2.
Any ideas how to fix this?
| [
"Turning my comments into a proper answer:\nAfter looking at the code, I found that BiCGSTAB like all solvers inherits from a base class designed to prevent copying, and by extension moving, too: class SparseSolverBase : internal::noncopyable\nThe exact reasons for this design choice I cannot tell. If I had to guess, I'd say some solvers probably use self-referential attributes (holding pointers to other members) which would break especially with fixed-size matrices. Or using Eigen::Map may cause issues on copy, especially copy-assignment.\nstd::vector only works with moveable types as it needs to move when it reallocates. Even when calling reserve() beforehand, the code still needs to compile, even if it is never executed.\nThree workarounds come to mind:\n\nUse std::deque. It provides a superset of all methods that vector has but its implementation means that as long as you only call emplace_back or emplace_front and not e.g. insert, it does not need moveable types. The downside is that it is a bit slower on all individual accesses\n\nUse std::vector<std::unique_ptr<Solver>>. Less efficient than the deque but now you can also insert, reshuffle, etc.\n\nUse std::unique_ptr<Solver[]> and use the good old new Solver[count] allocation. Starting with C++14, you can use std::make_unique<Solver[]>(count). This has the least overhead, even less than vector but the interface isn't as nice (you can use the [index] operator but the pointer doesn't even know the array size) and the number is fixed after allocation\n\n\n"
] | [
1
] | [] | [] | [
"c++",
"eigen",
"eigen3",
"vector"
] | stackoverflow_0074672400_c++_eigen_eigen3_vector.txt |
Q:
Xcode 12 IPHONEOS_DEPLOYMENT_TARGET warning for SPM dependencies
After updating to Xcode 12, I've got lots of warnings for SPM dependencies (including RxSwift and Facebook).
The iOS Simulator deployment target 'IPHONEOS_DEPLOYMENT_TARGET' is set to 8.0, but the range of supported deployment target versions is 9.0 to 14.0.99.
Can I suppress these warnings somehow, or is the only option to wait until the creators of the appropriate frameworks fix it?
A:
The warnings that you are seeing are caused by a change in Xcode 12, which now requires the deployment target to be set to a minimum of iOS 9.0. This is because Apple has dropped support for iOS 8.0 and earlier in Xcode 12, so any dependencies that are built using an older deployment target will not be compatible with the new version of Xcode.
To suppress these warnings, you will need to update the deployment target for your dependencies to a minimum of iOS 9.0. This can typically be done by raising the minimum platform version declared in the dependency's Package.swift file (its platforms parameter), which in practice means updating to, or contributing, a release of the package that does so.
Alternatively, you can wait for the creators of the dependencies to update their frameworks to use a minimum deployment target of iOS 9.0, which will resolve the warnings.
I hope this helps!
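For illustration, a raised minimum platform in a hypothetical dependency's Package.swift might look roughly like this (a sketch with made-up names, not taken from any of the packages mentioned above):
// swift-tools-version:5.3
import PackageDescription

let package = Package(
    name: "SomeDependency",
    platforms: [
        .iOS(.v9)   // raise this so it falls inside Xcode 12's supported range
    ],
    products: [
        .library(name: "SomeDependency", targets: ["SomeDependency"])
    ],
    targets: [
        .target(name: "SomeDependency")
    ]
)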
| Xcode 12 IPHONEOS_DEPLOYMENT_TARGET warning for SPM dependencies | After updating to Xcode 12, I've got lots of warnings for SPM dependencies (including RxSwift and Facebook).
The iOS Simulator deployment target 'IPHONEOS_DEPLOYMENT_TARGET' is set to 8.0, but the range of supported deployment target versions is 9.0 to 14.0.99.
Can I suppress these warnings somehow, or is the only way to wait till the creators of appropriate frameworks fixed it?
| [
"The warnings that you are seeing are caused by a change in Xcode 12, which now requires the deployment target to be set to a minimum of iOS 9.0. This is because Apple has dropped support for iOS 8.0 and earlier in Xcode 12, so any dependencies that are built using an older deployment target will not be compatible with the new version of Xcode.\nTo suppress these warnings, you will need to update the deployment target for your dependencies to a minimum of iOS 9.0. This can typically be done by modifying the deployment_target value in the dependency's Package.swift file, or by specifying the deployment target using the swift_tools_version attribute in the dependency's Package.swift file.\nAlternatively, you can wait for the creators of the dependencies to update their frameworks to use a minimum deployment target of iOS 9.0, which will resolve the warnings.\nI hope this helps!\n"
] | [
0
] | [] | [] | [
"swift5",
"swift_package_manager",
"xcode",
"xcode12"
] | stackoverflow_0063951616_swift5_swift_package_manager_xcode_xcode12.txt |
Q:
Why is the vocab size of Byte level BPE smaller than Unicode's vocab size?
I recently read GPT2 and the paper says:
This would result in a base vocabulary of over 130,000 before any multi-symbol tokens are added. This is prohibitively large compared to the 32,000 to 64,000 token vocabularies often used with BPE. In contrast, a byte-level version of BPE only requires a base vocabulary of size 256.
I really don't understand the words. The number of characters that Unicode represents is 130K but how can this be reduced to 256? Where's the rest of approximately 129K characters? What am I missing? Does byte-level BPE allow duplicating of representation between different characters?
I don't understand the logic. Below are my questions:
Why the size of vocab is reduced? (from 130K to 256)
What's the logic of the BBPE (Byte-level BPE)?
Detail question
Thank you for your answer, but I really don't get it. Let's say we have 130K unique characters. What we want (and what BBPE does) is to reduce this basic (unique) vocabulary. Each Unicode character can be converted to 1 to 4 bytes by using UTF-8 encoding. The original paper of BBPE (Neural Machine Translation with Byte-Level Subwords) says:
Representing text at the level of bytes and using the 256 bytes set as vocabulary is a potential solution to this issue.
Each byte can represent 256 characters (bits, 2^8), we only need 2^17 (131072) bits for representing the unique Unicode characters. In this case, where did the 256 bytes in the original paper come from? I don't know both the logic and how to derive this result.
I arrange my questions again, more detail:
How does BBPE work?
Why the size of vocab is reduced? (from 130K to 256 bytes)
Anyway, we always need 130K space for a vocab. What's the difference between representing unique characters as Unicode and Bytes?
Since I have little knowledge of computer architecture and programming, please let me know if there's something I missed.
Sincerely, thank you.
A:
Unicode code points are integers in the range 0..1,114,112, of which roughly 130k are in use at the moment. Every Unicode code point corresponds to a character, like "a" or "λ" or "龙", which is handy to work with in many cases (but there are a lot of complicated details, eg. combining marks).
When you save text data to a file, you use one of the UTFs (UTF-8, UTF-16, UTF-32) to convert code points (integers) to bytes. For UTF-8 (the most popular file encoding), each character is represented by 1, 2, 3, or 4 bytes (there's some inner logic to discriminate single- and multi-byte characters).
So when the base vocabulary consists of bytes, this means that rare characters will be encoded with multiple BPE segments.
Example
Let's consider a short example sentence like “That’s great 👍”.
With a base vocabulary of all Unicode characters, the BPE model starts off with something like this:
T 54
h 68
a 61
t 74
’ 2019
s 73
20
g 67
r 72
e 65
a 61
t 74
20
👍 1F44D
(The first column is the character, the second its codepoint in hexadecimal notation.)
If you first encode this sentence with UTF-8, then this sequence of bytes is fed to BPE instead:
T 54
h 68
a 61
t 74
� e2
� 80
� 99
s 73
20
g 67
r 72
e 65
a 61
t 74
20
� f0
� 9f
� 91
� 8d
The typographic apostrophe "’" and the thumbs-up emoji are represented by multiple bytes.
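A quick way to reproduce that byte table (my own Python sketch, not part of the original answer) is to UTF-8-encode the sentence and print each byte in hex:
sentence = "That\u2019s great \N{THUMBS UP SIGN}"  # ’ is U+2019, 👍 is U+1F44D
for byte in sentence.encode("utf-8"):
    print(f"{byte:02x}")
# prints 54 68 61 74 e2 80 99 73 20 67 72 65 61 74 20 f0 9f 91 8d (one byte per line)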
With either input, the BPE segmentation (after training) may end with something like this:
Th|at|’s|great|
(This is a hypothetical segmentation, but it's possible that capitalised “That“ is too rare to be represented as a single segment.)
The number of BPE operations is different though: to arrive at the segment ’s, only one merge step is required for code-point input, but three steps for byte input.
With byte input, the BPE segmentation is likely to end up with sub-character segments for rare characters.
The down-stream language model will have to learn to deal with that kind of input.
A:
So you already know BPE, right? Byte-level BPE is a refinement of how the base vocabulary is defined. Recall that there are roughly 143,859 characters in the Unicode alphabet, yet the GPT-2 vocabulary size is just 50,257. Having a base vocabulary of about 1.4 lakh (140K) would increase the size even more during the training process (where we merge frequently co-occurring units).
To solve this issue, GPT-2 uses a byte-level process with a base vocabulary of just 256 symbols, using which any Unicode character can be represented by either a single or multiple byte-level symbols. I still don't know the exact process by which a Unicode character is converted to its byte-level representation (the UTF-8 explanation in the other answer covers this).
Does this explanation give you clarity on why we go to a byte-level representation? Once again, GPT-2 uses this 256-symbol base vocabulary and increases the vocabulary size by adding frequently co-occurring units.
| Why is the vocab size of Byte level BPE smaller than Unicode's vocab size? | I recently read GPT2 and the paper says:
This would result in a base vocabulary of over 130,000 before any multi-symbol tokens are added. This is prohibitively large compared to the 32,000 to 64,000 token vocabularies often used with BPE. In contrast, a byte-level version of BPE only requires a base vocabulary of size 256.
I really don't understand the words. The number of characters that Unicode represents is 130K but how can this be reduced to 256? Where's the rest of approximately 129K characters? What am I missing? Does byte-level BPE allow duplicating of representation between different characters?
I don't understand the logic. Below are my questions:
Why the size of vocab is reduced? (from 130K to 256)
What's the logic of the BBPE (Byte-level BPE)?
Detail question
Thank you for your answer but I really don't get it. Let's say we have 130K unique characters. What we want (and BBPE do) is to reduce this basic (unique) vocabulary. Each Unicode character can be converted 1 to 4 bytes by utilizing UTF-8 encoding. The original paper of BBPE says (Neural Machine Translation with Byte-Level Subwords):
Representing text at the level of bytes and using the 256 bytes set as vocabulary is a potential solution to this issue.
Each byte can represent 256 characters (bits, 2^8), we only need 2^17 (131072) bits for representing the unique Unicode characters. In this case, where did the 256 bytes in the original paper come from? I don't know both the logic and how to derive this result.
I arrange my questions again, more detail:
How does BBPE work?
Why the size of vocab is reduced? (from 130K to 256 bytes)
Anyway, we always need 130K space for a vocab. What's the difference between representing unique characters as Unicode and Bytes?
Since I have little knowledge of computer architecture and programming, please let me know if there's something I missed.
Sincerely, thank you.
| [
"Unicode code points are integers in the range 0..1,114,112, of which roughly 130k are in use at the moment. Every Unicode code point corresponds to a character, like \"a\" or \"λ\" or \"龙\", which is handy to work with in many cases (but there are a lot of complicated details, eg. combining marks).\nWhen you save text data to a file, you use one of the UTFs (UTF-8, UTF-16, UTF-32) to convert code points (integers) to bytes. For UTF-8 (the most popular file encoding), each character is represented by 1, 2, 3, or 4 bytes (there's some inner logic to discriminate single- and multi-byte characters).\nSo when the base vocabulary are bytes, this means that rare characters will be encoded with multiple BPE segments.\nExample\nLet's consider a short example sentence like “That’s great ”.\nWith a base vocabulary of all Unicode characters, the BPE model starts off with something like this:\nT 54\nh 68\na 61\nt 74\n’ 2019\ns 73\n 20\ng 67\nr 72\ne 65\na 61\nt 74\n 20\n 1F44D\n\n(The first column is the character, the second its codepoint in hexadecimal notation.)\nIf you first encode this sentence with UTF-8, then this sequence of bytes is fed to BPE instead:\nT 54\nh 68\na 61\nt 74\n� e2\n� 80\n� 99\ns 73\n 20\ng 67\nr 72\ne 65\na 61\nt 74\n 20\n� f0\n� 9f\n� 91\n� 8d\n\nThe typographic apostrophe \"’\" and the thumbs-up emoji are represented by multiple bytes.\nWith either input, the BPE segmentation (after training) may end with something like this:\nTh|at|’s|great|\n\n(This is a hypothetical segmentation, but it's possible that capitalised “That“ is too rare to be represented as a single segment.)\nThe number of BPE operations is different though: to arrive at the segment ’s, only one merge step is required for code-point input, but three steps for byte input.\nWith byte input, the BPE segmentation is likely to end up with sub-character segments for rare characters.\nThe down-stream language model will have to learn to deal with that kind of input.\n",
"So you already know the BPE right Byte-level BPE is an improvisation of how the base vocabulary is defined. Recall, there is 1,43,859 unicode characters in unicode alphabets, but wonder how the gpt-2 vocabulary size is just 50,257. Having a base vocabulary of 1.4L will increase the size even more during the training process(where we will combine frequent occuring unicode characters).\nTo solve this issue GPT-2 uses a byte-level process which has a base vocabulary of just 256 characters using which any unicode characters can be represented by either a single or multiple byte-level characters. I still dont know the process of how a unicode character is converted to byte-level representation.\nDoes this explanation gave you a clarity why we go to a byte-level representation. Once again gpt-2 uses this 256 base vocabulary and increase the vocabulary size by adding frequent co occuring characters.\n"
] | [
2,
0
] | [] | [] | [
"nlp",
"unicode",
"utf_8"
] | stackoverflow_0066193575_nlp_unicode_utf_8.txt |
Q:
How can I bind FocusOut and Button-2 to a button?
There are many questions about binding 2 functions to an event or binding Ctrl+Key, space+Key to a button,
but I need to know how I can bind FocusOut + Button-2 to a button.
It may seem weird that I need it, but I do.
So my scenario is that after I open the widget, I will click somewhere else outside the widget to read the source from the existing browser, blah blah...
So how can I achieve this?
I did the below.
import tkinter as tk
def test(event):
print('test')
root = tk.Tk()
root.bind('<FocusOut><Button-2>', test) # root.bind('<space>w', test) this works by the way.
root.mainloop()
Of course I did many other combinations but to no avail.
A:
Does this help? You can't put root.bind in one line.
Try this:
import tkinter as tk
def test(event):
print('test')
root = tk.Tk()
root.bind("<FocusIn>", test)
root.bind("<Button-2>", test)
root.mainloop()
| How can I bind FocusOut and Button-2 to a button? | There are many questions about binding 2 functions to an event or binding Ctrl+Key, space+Key to a button,
but I need to know how I can bind FocusOut + Button-2 to a button.
It may seem weird that I need it, but I do.
So my scenario is that after I open the widget, I will click somewhere else outside the widget to read the source from the existing browser, blah blah...
So how can I acheive this?
I did the below.
import tkinter as tk
def test(event):
print('test')
root = tk.Tk()
root.bind('<FocusOut><Button-2>', test) # root.bind('<space>w', test) this works by the way.
root.mainloop()
Of course I did many other combinations but to no avail.
| [
"Does this help? You can't put root.bind in one line.\nTry this:\nimport tkinter as tk\n\ndef test(event):\n print('test')\n\nroot = tk.Tk()\n\nroot.bind(\"<FocusIn>\", test)\nroot.bind(\"<Button-2>\", test)\nroot.mainloop()\n\n"
] | [
0
] | [] | [] | [
"mouse",
"mouseevent",
"python",
"tkinter"
] | stackoverflow_0074674338_mouse_mouseevent_python_tkinter.txt |
Q:
ProseMirror -> A Text Rich Text Editor - Trigger an event pure js
I am implementing a chrome extension that fills a form basically.
This form has ProseMirror rich text editor in it.
I want to trigger Ctrl+V or paste operation on the text editor, but I couldn’t find any solution to this. these are the things I’ve tried so far:
let el = document.querySelector('[contenteditable="true"]')
el.focus()
navigator.clipboard.readText().then((clipText) => {
// el.innerText = clipText doesn't work
// el.innerHTML = clipText doesnt't work
// this.document.execCommand("insertText", false, clipText) doesnt work
})
document.dispatchEvent(new KeyboardEvent({key:'V', ctrlKey: true})) // doesnt work
When I do the pasting operation manually, the prose mirror component automatically converts it to a pretty table.
If you want to try -> copy a table then paste it here https://prosemirror-tables.netlify.app/
How do I trigger the paste event so the result looks like the expected case?
Even though this case is related to ProseMirror, we may consider this problem for other rich text editors as well.
If I just copy an image and paste it into a rich text editor, the picture will be uploaded, but it won't work if I try to paste it programmatically.
Additional screenshot to be clearer:
It looks like this if I paste it manually:
A:
I've tested the following code in Chromium 107.0.5304.121 (Official Build) Arch Linux (64-Bit), and it does what you require.
Notes:
Without the "clipboardRead" permission, document.execCommand("paste") doesn't paste the clipboard contents, and returns false.
The while loop removes all text from the div, so that it only contains the clipboard contents when the content script has finished.
The div element must be focused to receive the clipboard contents during the paste operation, that's why element.focus(); is necessary.
Without the zero-millisecond delay, the removed child nodes re-appear after the paste. This probably has something to do with the task queue.
Document.execCommand() is deprecated, so you should avoid using it if possible. Before you ask "Is there any other way to solve this problem, without using Document.execCommand() ?" --- Not to my knowledge, or I would've posted it here. But maybe someone else has a better solution.
You could probably use Native Messaging to simulate the user pressing Ctrl-V. That's more complicated, but doesn't use any deprecated features.
manifest.json
{
"manifest_version": 3,
"name": "Paste",
"version": "1.0",
"action": {
},
"background": {
"service_worker": "background.js"
},
"content_scripts": [
{
"matches": ["*://*/*"],
"js": ["content_script.js"]
}
],
"permissions": [ "clipboardRead" ]
}
background.js
console.log("background.js");
content_script.js
let element = document.querySelector('[contenteditable="true"]');
if (element) {
console.log("element.hasChildNodes()", element.hasChildNodes());
while (element.hasChildNodes()) {
element.removeChild(element.lastChild)
}
element.focus();
setTimeout(() => {
let result = document.execCommand("paste");
console.log("result", result);
}, 0);
}
else {
console.log('No elements with contenteditable="true"');
}
| ProseMirror -> A Text Rich Text Editor - Trigger an event pure js | I am implementing a chrome extension that fills a form basically.
This form has ProseMirror rich text editor in it.
I want to trigger Ctrl+V or paste operation on the text editor, but I couldn’t find any solution to this. these are the things I’ve tried so far:
let el = document.querySelector('[contenteditable="true"]')
el.focus()
navigator.clipboard.readText().then((clipText) => {
// el.innerText = clipText doesn't work
// el.innerHTML = clipText doesnt't work
// this.document.execCommand("insertText", false, clipText) doesnt work
})
document.dispatchEvent(new KeyboardEvent({key:'V', ctrlKey: true})) // doesnt work
When I do the pasting operation manually, the prose mirror component automatically converts it to a pretty table.
If you want to try -> copy a table then paste it here https://prosemirror-tables.netlify.app/
How do I trigger the paste event so it would look like as expected case?
Eventhough, this case is related to prose mirror, we may consider this problem for other rich text editors as well.
If I just copy an image and paste it into a rich text editor, picture will be uploaded but it won't work If I try to paste it programmatically
Additional screenshot to be clearer
It look like this If I paste it manually:
| [
"I've tested the following code in Chromium 107.0.5304.121 (Official Build) Arch Linux (64-Bit), and it does what you require.\nNotes:\n\nWithout the \"clipboardRead\" permission, document.execCommand(\"paste\") doesn't paste the clipboard contents, and returns false.\nThe while loop removes all text from the div, so that it only contains the clipboard contents when the content script has finished.\nThe div element must be focused to receive the clipboard contents during the paste operation, that's why element.focus(); is necessary.\nWithout the zero-millisecond delay, the removed child nodes re-appear after the paste. This probably has something to do with the task queue.\nDocument.execCommand() is deprecated, so you should avoid using it if possible. Before you ask \"Is there any other way to solve this problem, without using Document.execCommand() ?\" --- Not to my knowledge, or I would've posted it here. But maybe someone else has a better solution.\nYou could probably use Native Messaging to simulate the user pressing Ctrl-V. That's more complicated, but doesn't use any deprecated features.\n\nmanifest.json\n{\n \"manifest_version\": 3,\n \"name\": \"Paste\",\n \"version\": \"1.0\",\n \"action\": {\n },\n \"background\": {\n \"service_worker\": \"background.js\"\n },\n \"content_scripts\": [\n {\n \"matches\": [\"*://*/*\"],\n \"js\": [\"content_script.js\"]\n }\n ],\n \"permissions\": [ \"clipboardRead\" ]\n}\n\nbackground.js\nconsole.log(\"background.js\");\n\ncontent_script.js\nlet element = document.querySelector('[contenteditable=\"true\"]');\nif (element) {\n console.log(\"element.hasChildNodes()\", element.hasChildNodes());\n while (element.hasChildNodes()) {\n element.removeChild(element.lastChild)\n }\n element.focus();\n setTimeout(() => {\n let result = document.execCommand(\"paste\");\n console.log(\"result\", result);\n }, 0);\n}\nelse {\n console.log('No elements with contenteditable=\"true\"');\n}\n\n"
] | [
0
] | [] | [] | [
"google_chrome_extension",
"javascript",
"prose_mirror"
] | stackoverflow_0074674949_google_chrome_extension_javascript_prose_mirror.txt |
Q:
How can I stop Visual Studio from breaking on Exceptions when I have already swallowed them?
I'm developing an application inside Visual Studio using C#. This is a simple Console application.
In some places, I wish to swallow exceptions, so I use an empty catch block. It's by design.
When I hit F5 and an exception is raised in the code inside the try block of such a catch block, Visual Studio breaks on it.
This behavior is very annoying and reduces our debugging speed. I want those exceptions not to break at all.
How can I do that?
I searched the Options menu and I found nothing.
A:
Go to Debug > Windows > Exception Settings
A new window will open
Uncheck Common Language Runtime Exceptions
A:
Maybe start the program without debugging, and if your try block is in a loop, then you should write a continue in the catch block.
| How can I stop Visual Studio from breaking on Exceptions when I have already swallowed them? | I'm developing an application inside Visual Studio using C#. This is a simple Console application.
In some places, I wish to swallow exceptions. Thus I use an empty catch block. It' by design.
When I hit F5, in codes of the try block of that catch block, when exceptions raise Visual Studio breaks on them.
This behavior is very annoying and reduces our debugging speed. I want those exceptions to not break at all.
How can I do that?
I searched the Options menu and I found nothing.
| [
"Go to Debug > Windows > Exception Settings\nA new window will open\n\nUncheck Common Language Runtime Exceptions\n",
"Maybe start the programme without debugging and if your Try block is in a Loop then you should write a continue in the catch block\n"
] | [
3,
0
] | [] | [] | [
"c#"
] | stackoverflow_0074675978_c#.txt |
Q:
Is [email protected] removed from npm? It is not getting installed
While installing packages using npm on server, I am getting 404 error related to this module, [email protected]. Can anyone help me resolve this?
I tried to install it separately too, but still got errors related to 404.
Here's the logs:
npm ERR! ERESOLVE could not resolve
npm ERR!
npm ERR! While resolving: [email protected]
npm ERR! Found: [email protected]
npm ERR! node_modules/eslint
npm ERR! dev eslint@"6.0.1" from the root project
npm ERR! peer eslint@"2.x - 6.x" from [email protected]
npm ERR! node_modules/eslint-plugin-import
npm ERR! dev eslint-plugin-import@"2.18.0" from the root project
npm ERR! peer eslint-plugin-import@"^2.14.0" from [email protected]
npm ERR! node_modules/eslint-config-airbnb-base
npm ERR! dev eslint-config-airbnb-base@"13.1.0" from the root project
npm ERR!
npm ERR! Could not resolve dependency:
npm ERR! peer eslint@"^4.19.1 || ^5.3.0" from [email protected]
npm ERR! node_modules/eslint-config-airbnb-base
npm ERR! dev eslint-config-airbnb-base@"13.1.0" from the root project
npm ERR!
npm ERR! Conflicting peer dependency: [email protected]
npm ERR! node_modules/eslint
npm ERR! peer eslint@"^4.19.1 || ^5.3.0" from [email protected]
npm ERR! node_modules/eslint-config-airbnb-base
npm ERR! dev eslint-config-airbnb-base@"13.1.0" from the root project
npm ERR!
npm ERR! Fix the upstream dependency conflict, or retry
npm ERR! this command with --force, or --legacy-peer-deps
npm ERR! to accept an incorrect (and potentially broken) dependency resolution.
npm ERR!
npm ERR! See /root/.npm/eresolve-report.txt for a full report.
A:
It looks like the issue is with conflicting peer dependencies for eslint. The eslint-config-airbnb-base package requires a version of eslint that is either ^4.19.1 or ^5.3.0, but you have eslint version 6.0.1 installed.
To resolve this, you can try one of the following options:
Upgrade eslint-config-airbnb-base to a version that is compatible with your installed version of eslint by running the following command:
npm install eslint-config-airbnb-base@latest
Downgrade your installed version of eslint to a version that is compatible with eslint-config-airbnb-base by running the following command:
npm install eslint@"^4.19.1 || ^5.3.0"
Use the --force or --legacy-peer-deps flag when installing eslint-config-airbnb-base to bypass the peer dependency checks and install the package anyway. This may result in a broken or incorrect dependency resolution, so it should only be used as a last resort.
npm install [email protected] --force
npm install [email protected] --legacy-peer-deps
Once you have resolved the conflicting peer dependencies, you should be able to install eslint-config-airbnb-base without any errors.
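For reference, after option 2 the relevant devDependencies section of package.json might end up looking something like this (version numbers are illustrative; eslint 5.16.0 is simply the last 5.x release):
{
  "devDependencies": {
    "eslint": "^5.16.0",
    "eslint-config-airbnb-base": "13.1.0",
    "eslint-plugin-import": "2.18.0"
  }
}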
| Is [email protected] removed from npm? It is not getting installed | While installing packages using npm on server, I am getting 404 error related to this module, [email protected]. Can anyone help me resolve this?
I tried to install it separately too, but still got errors related to 404.
Here's the logs:
npm ERR! ERESOLVE could not resolve
npm ERR!
npm ERR! While resolving: [email protected]
npm ERR! Found: [email protected]
npm ERR! node_modules/eslint
npm ERR! dev eslint@"6.0.1" from the root project
npm ERR! peer eslint@"2.x - 6.x" from [email protected]
npm ERR! node_modules/eslint-plugin-import
npm ERR! dev eslint-plugin-import@"2.18.0" from the root project
npm ERR! peer eslint-plugin-import@"^2.14.0" from [email protected]
npm ERR! node_modules/eslint-config-airbnb-base
npm ERR! dev eslint-config-airbnb-base@"13.1.0" from the root project
npm ERR!
npm ERR! Could not resolve dependency:
npm ERR! peer eslint@"^4.19.1 || ^5.3.0" from [email protected]
npm ERR! node_modules/eslint-config-airbnb-base
npm ERR! dev eslint-config-airbnb-base@"13.1.0" from the root project
npm ERR!
npm ERR! Conflicting peer dependency: [email protected]
npm ERR! node_modules/eslint
npm ERR! peer eslint@"^4.19.1 || ^5.3.0" from [email protected]
npm ERR! node_modules/eslint-config-airbnb-base
npm ERR! dev eslint-config-airbnb-base@"13.1.0" from the root project
npm ERR!
npm ERR! Fix the upstream dependency conflict, or retry
npm ERR! this command with --force, or --legacy-peer-deps
npm ERR! to accept an incorrect (and potentially broken) dependency resolution.
npm ERR!
npm ERR! See /root/.npm/eresolve-report.txt for a full report.
| [
"It looks like the issue is with conflicting peer dependencies for eslint. The eslint-config-airbnb-base package requires a version of eslint that is either ^4.19.1 or ^5.3.0, but you have eslint version 6.0.1 installed.\nTo resolve this, you can try one of the following options:\nUpgrade eslint-config-airbnb-base to a version that is compatible with your installed version of eslint by running the following command:\nnpm install eslint-config-airbnb-base@latest\n\nDowngrade your installed version of eslint to a version that is compatible with eslint-config-airbnb-base by running the following command:\nnpm install eslint@\"^4.19.1 || ^5.3.0\"\n\nUse the --force or --legacy-peer-deps flag when installing eslint-config-airbnb-base to bypass the peer dependency checks and install the package anyway. This may result in a broken or incorrect dependency resolution, so it should only be used as a last resort.\nnpm install [email protected] --force\n\n\n\nnpm install [email protected] --legacy-peer-deps\n\nOnce you have resolved the conflicting peer dependencies, you should be able to install eslint-config-airbnb-base without any errors.\n"
] | [
0
] | [] | [] | [
"devops",
"node.js",
"npm_install",
"server"
] | stackoverflow_0074667513_devops_node.js_npm_install_server.txt |
Q:
How to build logic for this SQL question?
A ski resort company is planning to construct a new ski slope using a pre-existing network of mountain huts and trails between them. A new slope has to begin at one of the mountain huts, have a middle station at another hut connected with the first one by a direct trail, and end at a third mountain hut which is also connected by a direct trail to the second hut. The altitudes of the three huts chosen for constructing the ski slope have to be strictly decreasing.
You are given two tables:
create table mountain_huts (id integer not null, name varchar(40) not null, altitude integer not null, unique(name), unique(id));
create table trails (hut1 integer not null, hut2 integer not null);
Each entry in the table trails represents a direct connection between huts with IDS hut1 and hut2. All trails are bidirectional.
Create a query that finds all triplets (startpt, midpt, endpt) representing the mountain huts that may be used for construction of a ski slope.
Given the tables:
mountain_huts
id  name      altitude
1   Dakonat   1900
2   Natisa    2100
3   Gajantut  1600
4   Rifat      782
5   Tupur     1370
trails
hut1  hut2
1     3
3     2
3     5
4     5
1     5
This was one of the questions on my test. I am completely lost with the approach to solving this. I used lead functions to organise data in (start, mid, end) but could not exhaust all combinations.
A:
You can use the following two-step approach:
Order your huts in trails table in a way so hut1 altitude is always larger than hut2 altitude
Use self-join on the table from step 1 above to find mid-points
WITH ordered_huts AS -- ordering huts by altitude
(
SELECT CASE WHEN h1.altitude > h2.altitude THEN h1.id ELSE h2.id END AS hut1
,CASE WHEN h1.altitude > h2.altitude THEN h2.id ELSE h1.id END AS hut2
,CASE WHEN h1.altitude > h2.altitude THEN h1.altitude ELSE h2.altitude END AS hut1_altitude
,CASE WHEN h1.altitude > h2.altitude THEN h2.altitude ELSE h1.altitude END AS hut2_altitude
FROM trails
LEFT JOIN mountain_huts as h1 -- getting altitude for hut1
ON trails.hut1 = h1.id
LEFT JOIN mountain_huts as h2 -- getting altitude for hut2
ON trails.hut2 = h2.id
)
SELECT startpt.hut1 AS startpt -- starting point
,midpt.hut1 AS midpt -- mid-point
,midpt.hut2 AS endpt -- end-point
FROM ordered_huts AS startpt
JOIN ordered_huts as midpt
ON startpt.hut2 = midpt.hut1 -- finding all of the mid-points
Assumption here is that you only consider exactly one mid-point (i.e no direct connections and no multi-hops).
This is T-SQL dialect.
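For what it's worth, tracing the query by hand against the sample data above (an informal check on my part, not something stated in the original answer), it should return four triplets:
startpt  midpt  endpt
2        3      5      -- 2100 > 1600 > 1370
1        3      5      -- 1900 > 1600 > 1370
3        5      4      -- 1600 > 1370 >  782
1        5      4      -- 1900 > 1370 >  782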
| How to build logic for this SQL question? | A ski resort company is planning to construct a new ski slope using a pre-existing network of mountain huts and trails between them. A new slope has to begin at one of the mountain huts, have a middle station at another hut connected with the first one by a direct trail and end at the third mountain hut which is also connected by a direct trail to the second hut. The altitude of the three huts chosen for constructing the ski slop has to strictly decreasing.
You are given two tables:
create table mountains_huts ( id integer not null, name archer(40) not null, altitude integer not null, unique(name), unique(id);
create table trails (hut1 integer not null, hut 2 integer not null);
Each entry in the table trails represents a direct connection between huts with IDS hut1 and hut2. All trails are bidirectional.
Create a query that finds all triplets (startpt, midpt,endpt) representing the mountain huts that maybe used for construction of a ski slope.
Given the tables:
mountain_huts
id
name
altitude
1
Dakonat
1900
2
Natisa
2100
3
Gajantut
1600
4
Rifat
782
5
Tupur
1370
trails
hut1
hut2
1
3
3
2
3
5
4
5
1
5
This was one of the questions on my test. I am completely lost with the approach to solving this. I used lead functions to organise data in (start, mid, end) but could not exhaust all combinations.
| [
"You can use the following two-step approach:\n\nOrder your huts in trails table in a way so hut1 altitude is always larger than hut2 altitude\nUse self-join on the table from step 1 above to find mid-points\n\nWITH ordered_huts AS -- ordering huts by altitude\n(\n SELECT CASE WHEN h1.altitude > h2.altitude THEN h1.id ELSE h2.id END AS hut1\n ,CASE WHEN h1.altitude > h2.altitude THEN h2.id ELSE h1.id END AS hut2\n ,CASE WHEN h1.altitude > h2.altitude THEN h1.altitude ELSE h2.altitude END AS hut1_altitude\n ,CASE WHEN h1.altitude > h2.altitude THEN h2.altitude ELSE h1.altitude END AS hut2_altitude\n FROM trails\n LEFT JOIN mountain_huts as h1 -- getting altitude for hut1\n ON trails.hut1 = h1.id\n LEFT JOIN mountain_huts as h2 -- getting altitude for hut2\n ON trails.hut2 = h2.id\n)\n\nSELECT startpt.hut1 AS startpt -- starting point\n ,midpt.hut1 AS midpt -- mid-point\n ,midpt.hut2 AS endpt -- end-point\nFROM ordered_huts AS startpt\nJOIN ordered_huts as midpt\n ON startpt.hut2 = midpt.hut1 -- finding all of the mid-points\n\nAssumption here is that you only consider exactly one mid-point (i.e no direct connections and no multi-hops).\nThis is T-SQL dialect.\n"
] | [
0
] | [] | [] | [
"common_table_expression",
"interview",
"lag",
"logic",
"sql"
] | stackoverflow_0074673784_common_table_expression_interview_lag_logic_sql.txt |
Q:
Azure Data Storage: Unable to upload file (Not authorized to perform this operation using this permission)
I'm trying to follow the example to upload a file to Azure Data Storage as mentioned in the documentation : https://learn.microsoft.com/en-us/azure/storage/blobs/storage-quickstart-blobs-dotnet?tabs=visual-studio%2Cmanaged-identity%2Croles-azure-portal%2Csign-in-azure-cli%2Cidentity-visual-studio
Following is my code:
using Azure.Storage.Blobs;
using Azure.Storage.Blobs.Models;
using System;
using System.IO;
using Azure.Identity;
// TODO: Replace <storage-account-name> with your actual storage account name
var blobServiceClient = new BlobServiceClient(
new Uri("https://[some azure storage]"),
new DefaultAzureCredential());
// Set container name
string containerName = "data";
// Get container
BlobContainerClient containerClient = blobServiceClient.GetBlobContainerClient(containerName);
// Create a local file in the ./data/ directory for uploading and downloading
string localPath = "data";
Directory.CreateDirectory(localPath);
string fileName = "testupload" + Guid.NewGuid().ToString() + ".txt";
string localFilePath = Path.Combine(localPath, fileName);
// Write text to the file
await File.WriteAllTextAsync(localFilePath, "Hello, World!");
// Get a reference to a blob
BlobClient blobClient = containerClient.GetBlobClient(fileName);
Console.WriteLine("Uploading to Blob storage as blob:\n\t {0}\n", blobClient.Uri);
// Upload data from the local file
await blobClient.UploadAsync(localFilePath, true);
But I'm getting an error message that the request is not authorized.
Error message:
Azure.RequestFailedException: 'This request is not authorized to perform this operation using this permission.
I have the Contributor role (which, based on its description, grants full access to manage all resources). Is this role still not enough to perform the operation?
A:
Make sure you change the storage account's Network Access setting to allow public access from all networks, if you're not using a VPN or a dedicated network to access the Azure environment.
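If you prefer the CLI, the equivalent change can be made with something like the following (from memory; treat the exact flag as an assumption and double-check against az storage account update --help):
az storage account update \
    --name <storage-account-name> \
    --resource-group <resource-group> \
    --default-action Allow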
| Azure Data Storage: Unable to upload file (Not authorized to perform this operation using this permission) | I'm trying to follow the example to upload a file to Azure Data Storage as mentioned in the documentation : https://learn.microsoft.com/en-us/azure/storage/blobs/storage-quickstart-blobs-dotnet?tabs=visual-studio%2Cmanaged-identity%2Croles-azure-portal%2Csign-in-azure-cli%2Cidentity-visual-studio
Following is my code:
using Azure.Storage.Blobs;
using Azure.Storage.Blobs.Models;
using System;
using System.IO;
using Azure.Identity;
// TODO: Replace <storage-account-name> with your actual storage account name
var blobServiceClient = new BlobServiceClient(
new Uri("https://[some azure storage]"),
new DefaultAzureCredential());
// Set container name
string containerName = "data";
// Get container
BlobContainerClient containerClient = blobServiceClient.GetBlobContainerClient(containerName);
// Create a local file in the ./data/ directory for uploading and downloading
string localPath = "data";
Directory.CreateDirectory(localPath);
string fileName = "testupload" + Guid.NewGuid().ToString() + ".txt";
string localFilePath = Path.Combine(localPath, fileName);
// Write text to the file
await File.WriteAllTextAsync(localFilePath, "Hello, World!");
// Get a reference to a blob
BlobClient blobClient = containerClient.GetBlobClient(fileName);
Console.WriteLine("Uploading to Blob storage as blob:\n\t {0}\n", blobClient.Uri);
// Upload data from the local file
await blobClient.UploadAsync(localFilePath, true);
But I'm getting an error message that the request is not authorized.
Error message:
Azure.RequestFailedException: 'This request is not authorized to perform this operation using this permission.
I have Contributor role (which based on description is Grant full access to manage all resources ....), is this role still not enough to perform the operation?
| [
"Make sure you have to change the Network Access to Enable Public to All, if you're not using VPN or dedicated Network to access Azure Environment.\n"
] | [
0
] | [] | [] | [
"azure",
"azure_storage"
] | stackoverflow_0074636089_azure_azure_storage.txt |
Q:
How this code compiles without return statement in c?
How does this code compile even though I have not written a return in the else section?
#include <stdio.h>
int fibo(int n,int a,int b)
{
int x;
if(n==1)
printf("%d\n",b);
else
fibo(n-1,a+b,a);//Here
}
int main()
{
int num;
scanf("%d",&num);
fibo(num,1,1);
return 0;
}
or
#include <stdio.h>
int fibo(int n,int a,int b)
{
int x;
if(n==1)
return b;
else
fibo(n-1,a+b,a);//Here
}
int main()
{
int num;
scanf("%d",&num);
printf("%d",fibo(num,1,1));
return 0;
}
I tried many compilers and it still returns 13 for input 7. Let's forget about compilation for a second: how am I getting 13 (in the second code)? 13 is returned to the parent fibo call, but that parent fibo does not return it to its own parent, so how does the value 13 end up back in main?
A:
Your fibo() function is defined as returning an int, but doesn't actually return any values at all. It prints something, but that's a different thing entirely from returning a value.
C allows a function with a non-void return type to not return anything if and only if the return value is ignored where the function is called, which you're doing.
If you instead had
int val = fibo(num, 1, 1);
your code would then exhibit undefined behavior and all sorts of weird things could happen.
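For completeness, a version of the second snippet with the return made explicit (a sketch that keeps the original recursion) would be:
#include <stdio.h>

int fibo(int n, int a, int b)
{
    if (n == 1)
        return b;
    return fibo(n - 1, a + b, a); /* propagate the result back up the call chain */
}

int main(void)
{
    int num;
    if (scanf("%d", &num) == 1)
        printf("%d\n", fibo(num, 1, 1));
    return 0;
}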
| How this code compiles without return statement in c? | How this code compiles even though i have not written return in else section?
#include <stdio.h>
int fibo(int n,int a,int b)
{
int x;
if(n==1)
printf("%d\n",b);
else
fibo(n-1,a+b,a);//Here
}
int main()
{
int num;
scanf("%d",&num);
fibo(num,1,1);
return 0;
}
or
#include <stdio.h>
int fibo(int n,int a,int b)
{
int x;
if(n==1)
return b;
else
fibo(n-1,a+b,a);//Here
}
int main()
{
int num;
scanf("%d",&num);
printf("%d",fibo(num,1,1));
return 0;
}
I tried many compilers still it returns 13 for input 7.Let's forget about compilation for second,then also how i am getting 13 (in second code) because 13 is returned to parent fibo function and parent fibo function is not returning to its parent,then how in main function value 13 is returned.
| [
"Your fibo() function is defined as returning an int, but doesn't actually return any values at all. It prints something, but that's a different thing entirely from returning a value.\nC allows a function with a non-void return type to not return anything if and only if the return value is ignored where the function is called, which you're doing.\nIf you instead had\nint val = fibo(num, 1, 1);\n\nyour code would then exhibit undefined behavior and all sorts of weird things could happen.\n"
] | [
1
] | [] | [] | [
"c",
"recursion"
] | stackoverflow_0074675920_c_recursion.txt |
Q:
How to have type-safety dependent on a number? (like generics are dependent on a Type)
I want a Type that is "for" a certain number, and another Type for another number. But I don't want to have to manually define a Type for each number like Level1024 and Level1000. I want it to be simple to instantiate an instance of the Level class for each number, like we can do with generics where we can create a Level<string> and a Level<int> without needing to define a separate Level for each of them.
Here's the idea:
Level<1024> topPlayerOf1K;
Level<1000> Abe = new Level<1000>();
topPlayerOf1K = Abe; //This should show a squiggly line in Visual Studio.
How can I achieve that or something like that?
A:
Numbers literals are not considered types in C# like they are in TypeScript, and cannot be used as generic parameters like template parameters in C++.
At the minimum you would have to create types for each of the number literals you want to use. The approach could look like this:
interface IConstantInt { int Value { get; } }
class ConstantInt1000 : IConstantInt { public int Value => 1000; }
class ConstantInt1024 : IConstantInt { public int Value => 1024; }
class Level<TConstantInt> where TConstantInt : IConstantInt { }
var level1000 = new Level<ConstantInt1000>();
var level1024 = new Level<ConstantInt1024>();
It would be good to autogenerate this code if you're going to have many of those. This is not a great solution, but without knowing more about your program and what kind of errors you're trying to prevent, in the abstract, that's a way that you could encode number literals in the type system.
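To mirror the squiggly-line example from the question, usage of these hypothetical types would then look like this (the commented assignment would not compile, since the two Level<> instantiations are unrelated types):
Level<ConstantInt1024> topPlayerOf1K;
Level<ConstantInt1000> abe = new Level<ConstantInt1000>();
// topPlayerOf1K = abe;   // compile error: cannot convert Level<ConstantInt1000> to Level<ConstantInt1024>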
| How to have type-safety dependent on a number? (like generics are dependent on a Type) | I want a Type that is "for" a certain number, and another Type for another number. But I don't want to have to manually define a Type for each number like Level1024 and Level1000. I want it to be simple to instantiate an instance of the Level class for each number, like we can do with generics where we can create a Level<string> and a Level<int> without needing to define a separate Level for each of them.
Here's the idea:
Level<1024> topPlayerOf1K;
Level<1000> Abe = new Level<1000>();
topPlayerOf1K = Abe; //This should show a squiggly line in Visual Studio.
How can I achieve that or something like that?
| [
"Numbers literals are not considered types in C# like they are in TypeScript, and cannot be used as generic parameters like template parameters in C++.\nAt the minimum you would have to create types for each of the number literals you want to use. The approach could look like this:\ninterface IConstantInt { int Value { get; } }\n\nclass ConstantInt1000 : IConstantInt { public int Value => 1000; }\nclass ConstantInt1024 : IConstantInt { public int Value => 1024; }\n\nclass Level<TConstantInt> where TConstantInt : IConstantInt { }\n\nvar level1000 = new Level<ConstantInt1000>();\nvar level1024 = new Level<ConstantInt1024>();\n\nIt would be good to autogenerate this code if you're going to have many of those. This is not a great solution, but without knowing more about your program and what kind of errors you're trying to prevent, in the abstract, that's a way that you could encode number literals in the type system.\n"
] | [
1
] | [] | [] | [
".net",
"c#",
"generics",
"type_safety"
] | stackoverflow_0074675858_.net_c#_generics_type_safety.txt |
Q:
Entity Framework Core, retrieve data from second level related table with aggregation function
I have a situation that I'd like to solve, and it doesn't let me sleep at night. In essence, I would like to display cards (Bootstrap type) in a view which, for each person (it's just an example for analogy), shows the associated characteristics and the value of the last measurement (for example, imagining that the height was measured at 5 years and then at 10 years). In a situation similar to that of the image below, is it best practice to adopt the only logic that I was able to implement (sorry, but I am a hobbyist who loves programming and hasn't studied it formally), reported in the code below, or is it better to use partial views? In the second case, how are they implemented for the second level, or better, how is the value of the last date associated with each characteristic?
What I would get: https://codepen.io/lucora/pen/jOKQpGY
And this is my approach:
VIEW MODEL
public class ViewModel
{
public IEnumerable<PersonViewModel>? People { get; set; }
public IEnumerable<FeaturesViewModel>? Features { get; set; }
}
public class PersonViewModel
{
public int Id { get; set; }
public string Name{ get; set; } = string.Empty;
public string? Surname { get; set; }
public ICollection<FeaturesViewModel>? FeaturesViewModel{ get; set; }
}
public class FeaturesViewModel
{
public virtual PersonViewModel? People { get; set; }
public int Id { get; set; }
public string? Feature{ get; set; } = string.Empty;
public DateTime? DateLastValue { get; set; } = default;
public String? StringLastValue { get; set; } = string.Empty;
}
CONTROLLER
var myPeople = _context.Person!.Select(x => new PersonViewModel
{
Id = (int)x.Id!,
Name = x.Name,
Surname = x.Surname
});
var myFeatures = _context.Features!.Select(x => new FeaturesViewModel
{
Id = x.Id,
PersonId = (int)x.PersonId!,
Feature = x.Feature,
DateLastValue = x.Features.Feature-Value.Max(x => x.DateTimeMeasure),
//StringLastValue = ????
});
ViewModel VM = new();
VM.People = myPeople;
VM.Features = myFeatures;
return View(VM);
VIEW
@model VM
<div class="row">
<div class="col-md-4">
<h3>PEOPLE: </h3>
</div>
</div>
<br />
<div class="row">
@foreach (var item in Model.People!)
{
@*<img [email protected] class="card-img-top" alt="...">*@
<div class="card-body">
<h5 class="card-title">@item.Surname.ToUpper()</h5>
<h6 class="card-text mb-2 text-muted">@Html.DisplayFor(modelItem => item.Name)</h6>
<hr />
<p class="card-text">
FEATURES:
</p>
<hr/>
<p class="card-text">
@foreach (var feat in Model.Features!.Where(x=>x.PersonId == item.Id))
{
<div class="row">
<div class="col">
@feat.Feature - @feat.StringFeatureValue ??
</div>
</div>
} //end of Features
</p>
} //end of foreach People
</div>
A:
You can try this way:
Your view models
public class ViewModel
{
//the only one list you need in the view
public IEnumerable<PersonViewModel> People { get; set; }
}
public class PersonViewModel
{
public int Id { get; set; }
public string Name{ get; set; }
public string Surname { get; set; }
// All persons features
public ICollection<FeaturesViewModel> Features { get; set; }
}
public class FeaturesViewModel
{
public int Id { get; set; }
public string Feature{ get; set; }
// last feature value by datetime
public FeatureValueViewModel LastFeatureValue { get; set; }
}
public class FeatureValueViewModel
{
public DateTime DateTimeMeasure { get; set; }
public string MeasureValue { get; set; }
}
Controller
//If you have the right relations, You can take all the data you need that way
var people = _context.Person.Select(x => new PersonViewModel
{
Id = x.Id,
Name = x.Name,
Surname = x.Surname,
Features = x.Features.Select(y => new FeaturesViewModel
{
Id = y.Id,
Feature = y.Feature,
LastFeatureValue = y.FeatureValues
    .OrderByDescending(fv => fv.DateTimeMeasure)
    .Select(fv => new FeatureValueViewModel
    {
        DateTimeMeasure = fv.DateTimeMeasure,
        MeasureValue = fv.MeasureValue
    })
    .FirstOrDefault(),
}).ToList()
}).ToList();
ViewModel VM = new();
VM.People = people;
return View(VM);
View
@model VM
<div class="row">
<div class="col-md-4">
<h3>PEOPLE: </h3>
</div>
</div>
<br />
<div class="row">
@foreach (var person in Model.People)
{
@*<img [email protected] class="card-img-top" alt="...">*@
<div class="card-body">
<h5 class="card-title">@item.Surname.ToUpper()</h5>
<h6 class="card-text mb-2 text-muted">@Html.DisplayFor(modelItem => item.Name)</h6>
<hr />
<p class="card-text">
FEATURES:
</p>
<hr/>
<p class="card-text">
@foreach (var feat in person.Features)
{
<div class="row">
<div class="col">
@feat.LastFeatureValue.DateTimeMeasure - @feat.LastFeatureValue.MeasureValue
</div>
</div>
} //end of Features
</p>
} //end of foreach People
</div>
| Entity Framework Core, retrieve data from second level related table with aggregation function | I have a situation that I'd like to try that doesn't let me sleep at night. in essence, I would like to display cards (bootstrap type) in a View which, for each person (it's just an example for analogy), show the associated characteristics and the value of the last measurement (for example, imagining that the height was measured at 5 years and then at 10 years ). In a situation similar to that of the image below, it is a best practice to adopt the only logic that I was able to implement (sorry but I am a hobbyist who loves programming, unfortunately I have not studied for that) reported in the code below or Is it better to use Partial View? In the second case, how are they implemented for the second level or better, how is the value of the last date associated with each characteristic?
What I would get: https://codepen.io/lucora/pen/jOKQpGY
And this is my approach:
VIEW MODEL
public class ViewModel
{
public IEnumerable<PersonViewModel>? People { get; set; }
public IEnumerable<FeaturesViewModel>? Features { get; set; }
}
public class PersonViewModel
{
public int Id { get; set; }
public string Name{ get; set; } = string.Empty;
public string? Surname { get; set; }
public ICollection<FeaturesViewModel>? FeaturesViewModel{ get; set; }
}
public class FeaturesViewModel
{
public virtual PersonViewModel? People { get; set; }
public int Id { get; set; }
public string? Feature{ get; set; } = string.Empty;
public DateTime? DateLastValue { get; set; } = default;
public String? StringLastValue { get; set; } = string.Empty;
}
CONTROLLER
var myPeople = _context.Person!.Select(x => new PersonViewModel
{
Id = (int)x.Id!,
Name = x.Name,
Surname = x.Surname
});
var myFeatures = _context.Features!.Select(x => new FeaturesViewModel
{
Id = x.Id,
PersonId = (int)x.PersonId!,
Feature = x.Feature,
DateLastValue = x.Features.Feature-Value.Max(x => x.DateTimeMeasure),
//StringLastValue = ????
});
ViewModel VM = new();
VM.People = myPeople;
VM.Features = myFeatures;
return View(VM);
VIEW
@model VM
<div class="row">
<div class="col-md-4">
<h3>PEOPLE: </h3>
</div>
</div>
<br />
<div class="row">
@foreach (var item in Model.People!)
{
@*<img [email protected] class="card-img-top" alt="...">*@
<div class="card-body">
<h5 class="card-title">@item.Surname.ToUpper()</h5>
<h6 class="card-text mb-2 text-muted">@Html.DisplayFor(modelItem => item.Name)</h6>
<hr />
<p class="card-text">
FEATURES:
</p>
<hr/>
<p class="card-text">
@foreach (var feat in Model.Features!.Where(x=>x.PersonId == item.Id))
{
<div class="row">
<div class="col">
@feat.Feature - @feat.StringFeatureValue ??
</div>
</div>
} //end of Features
</p>
} //end of foreach People
</div>
| [
"You can try this way:\nYour view models\npublic class ViewModel\n{\n //the only one list you need in the view\n public IEnumerable<PersonViewModel> People { get; set; }\n}\n\npublic class PersonViewModel\n{\n public int Id { get; set; }\n public string Name{ get; set; }\n public string Surname { get; set; }\n // All persons features \n public ICollection<FeaturesViewModel> Features { get; set; }\n}\n\npublic class FeaturesViewModel\n{\n public int Id { get; set; }\n public string Feature{ get; set; }\n // last feature value by datetime\n public FeatureValueViewModel LastFeatureValue { get; set; }\n}\n\npublic class FeatureValueViewModel\n{\n public DateTime DateTimeMeassure { get; set; }\n public string MeasureValue { get; set; }\n}\n\nController\n//If you have the right relations, You can take all the data you need that way\nvar people = _context.Person.Select(x => new PersonViewModel\n{\n Id = x.Id,\n Name = x.Name,\n Surname = x.Surname\n Features = x.Features.Select(y => new FeaturesViewModel\n {\n Id = y.Id,\n Feature = y.Feature,\n LastFeatureValue = y.FeatureValues.OrderByDescending(x => x.DateTimeMeashure).FirstOrDefault(),\n }).ToList();\n\n}).ToList();\n\n\nViewModel VM = new();\nVM.People = myPeople;\n\nreturn View(VM); \n\n\nView\n@model VM\n\n<div class=\"row\">\n <div class=\"col-md-4\">\n <h3>PEOPLE: </h3>\n </div>\n</div>\n<br />\n\n<div class=\"row\">\n\n @foreach (var person in Model.People)\n {\n @*<img [email protected] class=\"card-img-top\" alt=\"...\">*@\n <div class=\"card-body\">\n <h5 class=\"card-title\">@item.Surname.ToUpper()</h5>\n <h6 class=\"card-text mb-2 text-muted\">@Html.DisplayFor(modelItem => item.Name)</h6>\n <hr />\n <p class=\"card-text\">\n FEATURES:\n </p>\n <hr/>\n <p class=\"card-text\">\n //\n @foreach (var feat in person.Features )\n {\n <div class=\"row\">\n <div class=\"col\">\n @feat.LastFeatureValue.DateTimeMeassure - @feat.LastFeatureValue.MeasureValue\n </div>\n </div>\n } //end of Features \n </p>\n } //end of foreach People\n</div>\n\n"
] | [
1
] | [] | [] | [
"asp.net_core",
"c#",
"entity_framework_core"
] | stackoverflow_0074674407_asp.net_core_c#_entity_framework_core.txt |
Q:
C# WPF static resource containing other static resources
I've just started learning WPF but I can't seem to figure out how to combine two or more string static resources in XAML. I have two static resources, UntitledFileName ("Untitled") and ApplicationName ("SomeAppName"). The third resource, DefaultWindowTitle, should be composed of the aforementioned resources, and should contain the value "Untitled - SomeAppName". How should I specify the two static resources when defining DefaultWindowTitle?
<sys:String x:Key="UntitledFileName">Untitled</sys:String>
<sys:String x:Key="ApplicationName">SomeAppName</sys:String>
<sys:String x:Key="DefaultWindowTitle">...</sys:String>
A:
I was planning to use "DefaultWindowTitle" as the window's title.
Perhaps this implementation will suit you:
<Window.Title>
<MultiBinding StringFormat="{}{0} - {1}">
<Binding Source="{StaticResource UntitledFileName}"/>
<Binding Source="{StaticResource ApplicationName}"/>
</MultiBinding>
</Window.Title>
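If a single combined DefaultWindowTitle resource is really required elsewhere, one fallback I can think of (my own sketch, not part of the answer above) is to compose it once in code-behind:
// e.g. in the Window's constructor, after InitializeComponent()
Resources["DefaultWindowTitle"] =
    $"{(string)FindResource("UntitledFileName")} - {(string)FindResource("ApplicationName")}";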
| C# WPF static resource containing other static resources | I've just started learning WPF but I can't seem to figure out how to combine two or more string static resources in XAML. I have two static resources, UntitledFileName ("Untitled") and ApplicationName ("SomeAppName"). The third resource, DefaultWindowTitle, should be composed of the aforementioned resources, and should contain the value "Untitled - SomeAppName". How should I specify the two static resources when defining DefaultWindowTitle?
<sys:String x:Key="UntitledFileName">Untitled</sys:String>
<sys:String x:Key="ApplicationName">SomeAppName</sys:String>
<sys:String x:Key="DefaultWindowTitle">...</sys:String>
| [
"\nI was planning to use \"DefaultWindowTitle\" as the window's title.\n\nPerhaps this implementation will suit you:\n<Window.Title>\n <MultiBinding StringFormat=\"{}{0} - {1}\">\n <Binding Source=\"{StaticResource UntitledFileName}\"/>\n <Binding Source=\"{StaticResource ApplicationName}\"/>\n <MultiBinding>\n</Window.Title>\n\n"
] | [
0
] | [] | [] | [
"c#",
"staticresource",
"string",
"wpf",
"xaml"
] | stackoverflow_0074667519_c#_staticresource_string_wpf_xaml.txt |
Q:
Computing KL-divergence over 2 estimated gaussian KDEs
I have two datasets with the same features and would like to estimate the "distance of distributions" between the two datasets. I had the idea to estimate a gaussian KDE in each of the datasets and computing the KL-divergence between the estimated KDEs. However, I am struggling to compute the "distance" between the distributions. This is what I have so far:
import numpy as np
from scipy import stats
from scipy.stats import entropy
dataset1 = np.random.rand(50)
dataset2 = np.random.rand(49)
kernel1 = stats.gaussian_kde(dataset1)
kernel2 = stats.gaussian_kde(dataset2)
I know I can use entropy(pk, qk) to calculate the kl-divergence but I don't understand how do that starting from the kernels. I thought about generating some random points and using entropy(kernel1.pdf(points),kernel2.pdf(points)) but the pdf function outputs some weird number (higher than 1 sometimes, does it mean it assigns more than 100% of prob??), and I am not sure the output I get is correct.
If anyone knows how to calculate the distance between the 2 gaussian kde kernels I would be very thankful.
A:
There is no closed form solution for KL between two mixtures of gaussians.
KL(p, q) := E_p log [p(x)/q(x)]
so you can use MC estimator:
def KL_mc(p, q, n=100):
points = p.resample(n)
p_pdf = p.pdf(points)
q_pdf = q.pdf(points)
return np.log(p_pdf / q_pdf).mean()
Note:
you might need to add some clipping to avoid 0s and infinities
depending on the dimensionality of the space this can require quite large n
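Both notes can be folded into a quick usage sketch with the kernels from the question (the sample size n=1000 and the 1e-12 clipping floor are arbitrary choices of mine, not part of the estimator itself):
import numpy as np
from scipy import stats

dataset1 = np.random.rand(50)
dataset2 = np.random.rand(49)
kernel1 = stats.gaussian_kde(dataset1)
kernel2 = stats.gaussian_kde(dataset2)

def KL_mc(p, q, n=1000, eps=1e-12):
    points = p.resample(n)
    # clip the densities so np.log never receives 0
    p_pdf = np.clip(p.pdf(points), eps, None)
    q_pdf = np.clip(q.pdf(points), eps, None)
    return np.log(p_pdf / q_pdf).mean()

print(KL_mc(kernel1, kernel2))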
(higher than 1 sometimes, does it mean it assigns more than 100% of prob??)
PDF is not a probability. Not for continuous distributions. It is a probability density. It is a relative measure. Probability assigned to any single value is always 0, but probability of sampling an element in a given set/interval equals integral of pdf over this set/integral (and thus pointwise it can have a weight >1, but over a "small enough" set)
More general solution
Overall, unless you really need KL for theoretical reasons, there are divergences that are better suited to deal with gaussian mixtures (e.g. such that have closed form solutions), for example Cauchy-Schwarz Divergence.
In particular you can look at Maximum Entropy Linear Manifold which is based exactly on computing CS divergences between KDEs of points. You can see python implementation in melm/dcsk.py in value(v) function on github. In your case you do not want a projection so just put v = identity matrix.
def value(self, v):
# We need matrix, not vector
v = v.reshape(-1, self.k)
ipx0 = self._ipx(self.x0, self.x0, v)
ipx1 = self._ipx(self.x1, self.x1, v)
ipx2 = self._ipx(self.x0, self.x1, v)
return np.log(ipx0) + np.log(ipx1) - 2 * np.log(ipx2)
def _f1(self, X0, X1, v):
Hxy = self.gamma * self.gamma * self._H(X0, X1)
vHv = v.T.dot(Hxy).dot(v)
return 1.0 / (X0.shape[0] * X1.shape[0] * np.sqrt(la.det(vHv)) * (2 * np.pi) ** (self.k / 2))
def _f2(self, X0, X1, v):
Hxy = self.gamma * self.gamma * self._H(X0, X1)
vHv = v.T.dot(Hxy).dot(v)
vHv_inv = la.inv(vHv)
vx0 = X0.dot(v)
vx1 = X1.dot(v)
vx0c = vx0.dot(vHv_inv)
vx1c = vx1.dot(vHv_inv)
ret = 0.0
for i in range(X0.shape[0]):
ret += np.exp(-0.5 * ((vx0c[i] - vx1c) * (vx0[i] - vx1)).sum(axis=1)).sum()
return ret
def _ipx(self, X0, X1, v):
return self._f1(X0, X1, v) * self._f2(X0, X1, v)
Main difference between CS and KL is that KL requires your to compute integral of a logarithm of a pdf and CS computes logarithm of the integral. It happens, that with gaussian mixtures it is this integration of the logarithm that is a problem, without the logarithm everything is easy, and thus DCS is preferable.
| Computing KL-divergence over 2 estimated gaussian KDEs | I have two datasets with the same features and would like to estimate the "distance of distributions" between the two datasets. I had the idea to estimate a gaussian KDE in each of the datasets and computing the KL-divergence between the estimated KDEs. However, I am struggling to compute the "distance" between the distributions. This is what I have so far:
import numpy as np
from scipy import stats
from scipy.stats import entropy
dataset1 = np.random.rand(50)
dataset2 = np.random.rand(49)
kernel1 = stats.gaussian_kde(dataset1)
kernel2 = stats.gaussian_kde(dataset2)
I know I can use entropy(pk, qk) to calculate the KL-divergence but I don't understand how to do that starting from the kernels. I thought about generating some random points and using entropy(kernel1.pdf(points),kernel2.pdf(points)) but the pdf function outputs some weird number (higher than 1 sometimes, does it mean it assigns more than 100% of prob??), and I am not sure the output I get is correct.
If anyone knows how to calculate the distance between the 2 gaussian kde kernels I would be very thankful.
| [
"There is no closed form solution for KL between two mixtures of gaussians.\nKL(p, q) := -E_p log [p(x)/q(x)]\n\nso you can use MC estimator:\ndef KL_mc(p, q, n=100):\n points = p.resample(n)\n p_pdf = p.pdf(points)\n q_pdf = q.pdf(points)\n return np.log(p_pdf / q_pdf).mean()\n\nNote:\n\nyou might need to add some clipping to avoid 0s and infinities\ndepending on the dimensionality of the space this can require quite large n\n\n\n(higher than 1 sometimes, does it mean it assigns more than 100% of prob??)\n\nPDF is not a probability. Not for continuous distributions. It is a probability density. It is a relative measure. Probability assigned to any single value is always 0, but probability of sampling an element in a given set/interval equals integral of pdf over this set/integral (and thus pointwise it can have a weight >1, but over a \"small enough\" set)\nMore general solution\nOverall, unless you really need KL for theoretical reasons, there are divergences that are better suited to deal with gaussian mixtures (e.g. such that have closed form solutions), for example Cauchy-Schwarz Divergence.\nIn particular you can look at Maximum Entropy Linear Manifold which is based exactly on computing CS divergences between KDEs of points. You can see python implementation in melm/dcsk.py in value(v) function on github. In your case you do not want a projection so just put v = identity matrix.\n def value(self, v):\n # We need matrix, not vector\n v = v.reshape(-1, self.k)\n\n ipx0 = self._ipx(self.x0, self.x0, v)\n ipx1 = self._ipx(self.x1, self.x1, v)\n ipx2 = self._ipx(self.x0, self.x1, v)\n\n return np.log(ipx0) + np.log(ipx1) - 2 * np.log(ipx2)\n\n def _f1(self, X0, X1, v):\n Hxy = self.gamma * self.gamma * self._H(X0, X1)\n vHv = v.T.dot(Hxy).dot(v)\n return 1.0 / (X0.shape[0] * X1.shape[0] * np.sqrt(la.det(vHv)) * (2 * np.pi) ** (self.k / 2))\n\n def _f2(self, X0, X1, v):\n Hxy = self.gamma * self.gamma * self._H(X0, X1)\n vHv = v.T.dot(Hxy).dot(v)\n vHv_inv = la.inv(vHv)\n\n vx0 = X0.dot(v)\n vx1 = X1.dot(v)\n vx0c = vx0.dot(vHv_inv)\n vx1c = vx1.dot(vHv_inv)\n\n ret = 0.0\n for i in range(X0.shape[0]):\n ret += np.exp(-0.5 * ((vx0c[i] - vx1c) * (vx0[i] - vx1)).sum(axis=1)).sum()\n return ret\n\n def _ipx(self, X0, X1, v):\n return self._f1(X0, X1, v) * self._f2(X0, X1, v)\n\nMain difference between CS and KL is that KL requires your to compute integral of a logarithm of a pdf and CS computes logarithm of the integral. It happens, that with gaussian mixtures it is this integration of the logarithm that is a problem, without the logarithm everything is easy, and thus DCS is preferable.\n"
] | [
1
] | [] | [] | [
"machine_learning",
"python",
"scikit_learn",
"statistics"
] | stackoverflow_0074675438_machine_learning_python_scikit_learn_statistics.txt |
Q:
How do I reference CSS variable in Tailwind to define a custom class?
I want to define a variable in a CSS file like this:
:root {
--sidebar-width: 56;
}
I'd like to now refer to that in a component to define that component's width:
<div className="w-[var(--sidebar-width)]">
<MySidebar>
</div>
This doesn't work. What I'm trying to achieve is to add the w-56 class to that component and to do so as a variable so that I can refer to that variable in several places. Is this possible and if so, how do I specify this?
A:
I'm pretty sure it's impossible.
Just do this instead:
:root {
--sidebar-width: 56;
}
<div className="w-[calc(4px*var(--sidebar-width)]">
<MySidebar>
</div>
1 tailwind unit is 4px.
A:
Tailwind does support CSS custom properties using arbitrary values.
:root {
--sidebar-width: 56px;
}
<script src="https://cdn.tailwindcss.com"></script>
<div class="w-[length:var(--sidebar-width)] bg-red-900">Test</div>
| How do I reference CSS variable in Tailwind to define a custom class? | I want to define a variable in a CSS file like this:
:root {
--sidebar-width: 56;
}
I'd like to now refer to that in a component to define that component's width:
<div className="w-[var(--sidebar-width)]">
<MySidebar>
</div>
This doesn't work. What I'm trying to achieve is to add the w-56 class to that component and to do so as a variable so that I can refer to that variable in several places. Is this possible and if so, how do I specify this?
| [
"I'm pretty sure it's impossible.\nJust do this instead:\n:root {\n --sidebar-width: 56;\n}\n\n<div className=\"w-[calc(4px*var(--sidebar-width)]\">\n <MySidebar>\n</div>\n\n1 tailwind unit is 4px.\n",
"Tailwind does support CSS custom properties using arbitrary values.\n\n\n:root {\n --sidebar-width: 56px;\n}\n<script src=\"https://cdn.tailwindcss.com\"></script>\n<div class=\"w-[length:var(--sidebar-width)] bg-red-900\">Test</div>\n\n\n\n"
] | [
1,
0
] | [] | [] | [
"css",
"tailwind_css"
] | stackoverflow_0074675933_css_tailwind_css.txt |
Q:
Rearranging with pandas melt
I am trying to rearrange a DataFrame. Currently, I have 1035 rows and 24 columns, one for each hour of the day. I want to make this an array with 1035*24 rows. If you want to see the data, it can be extracted from the following JSON file:
url = "https://www.svk.se/services/controlroom/v2/situation?date={}&biddingArea=SE1"
svk = []
for i in parsing_range_svk:
data_json_svk = json.loads(urlopen(url.format(i)).read())
svk.append([v["y"] for v in data_json_svk["Data"][0]["data"]])
This is the code I am using to rearrange this data, but it is not doing the job. The first observation is in the right place, then it starts getting messy. I have not been able to figure out where each observation goes.
svk = pd.DataFrame(svk)
date_start1 = datetime(2020, 1, 1)
date_range1 = [date_start1 + timedelta(days=x) for x in range(1035)]
date_svk = pd.DataFrame(date_range1, columns=['date'])
svk['date'] = date_svk['date']
svk.drop(24, axis=1, inplace=True)
consumption_svk_1 = (svk.melt('date', value_name='SE1_C')
.assign(date = lambda x: x['date'] +
pd.to_timedelta(x.pop('variable').astype(float), unit='h'))
.sort_values('date', ignore_index=True))
A:
To rearrange the DataFrame in the desired way, you can use the pandas.DataFrame.stack method to reshape the DataFrame from wide to long format. Then, you can drop the variable column and rename the date column to the desired name.
consumption_svk_1 = (svk.stack()
.reset_index()
.rename(columns={'level_1': 'hour', 'date': 'timestamp'})
.sort_values('timestamp', ignore_index=True))
This should give you a DataFrame with 1035*24 rows and three columns: timestamp, hour, and value. Note that the timestamp column is not in the correct format, so you will need to convert it to a datetime format using the pandas.to_datetime method. Here is an example:
consumption_svk_1['timestamp'] = pd.to_datetime(consumption_svk_1['timestamp'])
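If you would rather stay with melt as in the question, a minimal sketch of the same idea — it assumes the 24 hourly columns are still named 0–23 and that date holds the day:
long_df = svk.melt(id_vars='date', var_name='hour', value_name='SE1_C')
# combine the day with the hour offset to get one timestamp per observation
long_df['timestamp'] = long_df['date'] + pd.to_timedelta(long_df['hour'].astype(int), unit='h')
long_df = long_df.sort_values('timestamp', ignore_index=True)[['timestamp', 'SE1_C']]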
| Rearranging with pandas melt | I am trying to rearrange a DataFrame. Currently, I have 1035 rows and 24 columns, one for each hour of the day. I want to make this an array with 1035*24 rows. If you want to see the data, it can be extracted from the following JSON file:
url = "https://www.svk.se/services/controlroom/v2/situation?date={}&biddingArea=SE1"
svk = []
for i in parsing_range_svk:
data_json_svk = json.loads(urlopen(url.format(i)).read())
svk.append([v["y"] for v in data_json_svk["Data"][0]["data"]])
This is the code I am using to rearrange this data, but it is not doing the job. The first observation is in the right place, then it starts getting messy. I have not been able to figure out where each observation goes.
svk = pd.DataFrame(svk)
date_start1 = datetime(2020, 1, 1)
date_range1 = [date_start1 + timedelta(days=x) for x in range(1035)]
date_svk = pd.DataFrame(date_range1, columns=['date'])
svk['date'] = date_svk['date']
svk.drop(24, axis=1, inplace=True)
consumption_svk_1 = (svk.melt('date', value_name='SE1_C')
.assign(date = lambda x: x['date'] +
pd.to_timedelta(x.pop('variable').astype(float), unit='h'))
.sort_values('date', ignore_index=True))
| [
"To rearrange the DataFrame in the desired way, you can use the pandas.DataFrame.stack method to reshape the DataFrame from wide to long format. Then, you can drop the variable column and rename the date column to the desired name.\nconsumption_svk_1 = (svk.stack()\n .reset_index()\n .rename(columns={'level_1': 'hour', 'date': 'timestamp'})\n .sort_values('timestamp', ignore_index=True))\n\nThis should give you a DataFrame with 1035*24 rows and three columns: timestamp, hour, and value. Note that the timestamp column is not in the correct format, so you will need to convert it to a datetime format using the pandas.to_datetime method. Here is an example:\nconsumption_svk_1['timestamp'] = pd.to_datetime(consumption_svk_1['timestamp'])\n\n"
] | [
0
] | [] | [] | [
"json",
"pandas_melt",
"python"
] | stackoverflow_0074675971_json_pandas_melt_python.txt |
Q:
I'm trying to split and remove unnecessary characters from a column using pandas
I'm trying to remove all the unnecessary words and characters from the values in this column. I want the rows to contain 'Entry level', 'Mid-Senior level' etc. Also, is there any way to translate the Arabic to English, or shall I use the replace function?
df_africa.seniority_level.value_counts()
{'Seniority level': 'Entry level'} 1073
{'Seniority level': 'Mid-Senior level'} 695
{'Seniority level': 'Associate'} 481
{'Seniority level': 'Not Applicable'} 150
{'مستوى الأقدمية': 'مستوى متوسط الأقدمية'} 115
{'مستوى الأقدمية': 'مستوى المبتدئين'} 82
{'نوع التوظيف': 'دوام كامل'} 73
{'مستوى الأقدمية': 'مساعد'} 48
{'مستوى الأقدمية': 'غير مطبق'} 42
{'Seniority level': 'Internship'} 39
{'Employment type': 'Contract'} 21
{'Employment type': 'Full-time'} 1
I've tried the split function but I couldn't get it to work properly.
A:
IIUC, use this :
import ast
#Is there any non-latin letters?
m = ~df_africa["seniority_level"].str.contains("[A-Z]")
s = df_africa["seniority_level"].apply(lambda x: ast.literal_eval(x))
df_africa["new_col"] = s.str["مستوى الأقدمية"].where(m, s.str["Seniority level"])
If you need to translate the words extracted, use deep-translator :
#pip install -U deep-translator
from deep_translator import GoogleTranslator
df_africa["new_col (TRA)"] = (
df_africa["new_col"]
.fillna("")
.apply(lambda x: GoogleTranslator(source="arabic")
.translate(x)
.title())
.replace("", None)
)
Even though I suggest you to use a custom dict using map to get the appropriate translation.
# Output :
display(df_africa)
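A sketch of that custom-dict idea — the English labels below are my own guesses at the intended categories, so verify them before relying on the mapping:
translation_map = {
    "مستوى متوسط الأقدمية": "Mid-Senior level",
    "مستوى المبتدئين": "Entry level",
    "مساعد": "Associate",
    "غير مطبق": "Not Applicable",
    "دوام كامل": "Full-time",
}
# replace() keeps values that are already in English untouched,
# whereas map() would turn them into NaN
df_africa["new_col (TRA)"] = df_africa["new_col"].replace(translation_map)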
A:
Would be useful to know the type of the 'seniority_level' column, but I'm just gonna assume the column is made up of literal strings (e.g. "{'Seniority level': 'Entry level'}")
Can translate all the text with this googletrans package, it piggybacks off google translate so use it while it lasts. Make sure to install version 4.0.0rc1.
$ pip install googletrans==4.0.0rc1
translate:
from googletrans import Translator
translator = Translator()
def translate_to_english(words):
for character in words:
if ord(character) > 127:
return translator.translate(words, dest="en").text
return words
df_africa["new_seniority_level"] = df_africa["seniority_level"].map(lambda row: translate_to_english(row))
print(df_africa)
seniority_level new_seniority_level
0 {'Seniority level': 'Entry level'} {'Seniority level': 'Entry level'}
1 {'Seniority level': 'Mid-Senior level'} {'Seniority level': 'Mid-Senior level'}
2 {'Seniority level': 'Associate'} {'Seniority level': 'Associate'}
3 {'Seniority level': 'Not Applicable'} {'Seniority level': 'Not Applicable'}
4 {'مستوى الأقدمية': 'مستوى متوسط الأقدمية'} {'Seniority level': 'average level of seniority'}
5 {'مستوى الأقدمية': 'مستوى المبتدئين'} {'Seniority level': 'beginners' level'}
6 {'نوع التوظيف': 'دوام كامل'} {'Recruitment type': 'full time'}
7 {'مستوى الأقدمية': 'مساعد'} {'Seniority level': 'assistant'}
8 {'مستوى الأقدمية': 'غير مطبق'} {'Senior level': 'unprecedented'}
9 {'Seniority level': 'Internship'} {'Seniority level': 'Internship'}
10 {'Employment type': 'Contract'} {'Employment type': 'Contract'}
11 {'Employment type': 'Full-time'} {'Employment type': 'Full-time'}
then get the text you want:
import re
df_africa["new_seniority_level"] = df_africa["new_seniority_level"].map(lambda row: re.match(r".+: '(.*)'", row).group(1))
print(df_africa)
seniority_level new_seniority_level
0 {'Seniority level': 'Entry level'} Entry level
1 {'Seniority level': 'Mid-Senior level'} Mid-Senior level
2 {'Seniority level': 'Associate'} Associate
3 {'Seniority level': 'Not Applicable'} Not Applicable
4 {'مستوى الأقدمية': 'مستوى متوسط الأقدمية'} average level of seniority
5 {'مستوى الأقدمية': 'مستوى المبتدئين'} beginners' level
6 {'نوع التوظيف': 'دوام كامل'} full time
7 {'مستوى الأقدمية': 'مساعد'} assistant
8 {'مستوى الأقدمية': 'غير مطبق'} unprecedented
9 {'Seniority level': 'Internship'} Internship
10 {'Employment type': 'Contract'} Contract
11 {'Employment type': 'Full-time'} Full-time
Look into official google translate api if googletrans eventually breaks.
| I'm trying to split and remove unnecessary characters from a column using pandas | I'm trying to remove all the unnecessary words and characters from the values in this column. I want the rows to contain 'Entry level', 'Mid-Senior level' etc. Also, is there any way to translate the Arabic to English, or shall I use the replace function?
df_africa.seniority_level.value_counts()
{'Seniority level': 'Entry level'} 1073
{'Seniority level': 'Mid-Senior level'} 695
{'Seniority level': 'Associate'} 481
{'Seniority level': 'Not Applicable'} 150
{'مستوى الأقدمية': 'مستوى متوسط الأقدمية'} 115
{'مستوى الأقدمية': 'مستوى المبتدئين'} 82
{'نوع التوظيف': 'دوام كامل'} 73
{'مستوى الأقدمية': 'مساعد'} 48
{'مستوى الأقدمية': 'غير مطبق'} 42
{'Seniority level': 'Internship'} 39
{'Employment type': 'Contract'} 21
{'Employment type': 'Full-time'} 1
I've tried the split function but I couldn't get it to work properly.
| [
"IIUC, use this :\nimport ast\n\n#Is there any non-latin letters?\nm = ~df_africa[\"seniority_level\"].str.contains(\"[A-Z]\")\n\ns = df_africa[\"seniority_level\"].apply(lambda x: ast.literal_eval(x))\ndf_africa[\"new_col\"] = s.str[\"مستوى الأقدمية\"].where(m, s.str[\"Seniority level\"])\n\nIf you need to translate the words extracted, use deep-translator :\n#pip install -U deep-translator\nfrom deep_translator import GoogleTranslator\n\ndf_africa[\"new_col (TRA)\"] = (\n df_africa[\"new_col\"]\n .fillna(\"\")\n .apply(lambda x: GoogleTranslator(source=\"arabic\")\n .translate(x)\n .title())\n .replace(\"\", None)\n )\n\nEven though I suggest you to use a custom dict using map to get the appropriate translation.\n# Output :\ndisplay(df_africa)\n\n",
"Would be useful to know the type of the 'seniority_level' column, but I'm just gonna assume the column is made up of literal strings (e.g. \"{'Seniority level': 'Entry level'}\")\nCan translate all the text with this googletrans package, it piggybacks off google translate so use it while it lasts. Make sure to install version 4.0.0rc1.\n$ pip install googletrans==4.0.0rc1\n\ntranslate:\nfrom googletrans import Translator\n\ntranslator = Translator()\n\ndef translate_to_english(words):\n for character in words:\n if ord(character) > 127:\n return translator.translate(words, dest=\"en\").text\n return words\n\ndf_africa[\"new_seniority_level\"] = df_africa[\"seniority_level\"].map(lambda row: translate_to_english(row))\nprint(df_africa)\n\n\n seniority_level new_seniority_level\n0 {'Seniority level': 'Entry level'} {'Seniority level': 'Entry level'}\n1 {'Seniority level': 'Mid-Senior level'} {'Seniority level': 'Mid-Senior level'}\n2 {'Seniority level': 'Associate'} {'Seniority level': 'Associate'}\n3 {'Seniority level': 'Not Applicable'} {'Seniority level': 'Not Applicable'}\n4 {'مستوى الأقدمية': 'مستوى متوسط الأقدمية'} {'Seniority level': 'average level of seniority'}\n5 {'مستوى الأقدمية': 'مستوى المبتدئين'} {'Seniority level': 'beginners' level'}\n6 {'نوع التوظيف': 'دوام كامل'} {'Recruitment type': 'full time'}\n7 {'مستوى الأقدمية': 'مساعد'} {'Seniority level': 'assistant'}\n8 {'مستوى الأقدمية': 'غير مطبق'} {'Senior level': 'unprecedented'}\n9 {'Seniority level': 'Internship'} {'Seniority level': 'Internship'}\n10 {'Employment type': 'Contract'} {'Employment type': 'Contract'}\n11 {'Employment type': 'Full-time'} {'Employment type': 'Full-time'}\n\n\nthen get the text you want:\nimport re\n\ndf_africa[\"new_seniority_level\"] = df_africa[\"new_seniority_level\"].map(lambda row: re.match(r\".+: '(.*)'\", row).group(1))\nprint(df_africa)\n\n seniority_level new_seniority_level\n0 {'Seniority level': 'Entry level'} Entry level\n1 {'Seniority level': 'Mid-Senior level'} Mid-Senior level\n2 {'Seniority level': 'Associate'} Associate\n3 {'Seniority level': 'Not Applicable'} Not Applicable\n4 {'مستوى الأقدمية': 'مستوى متوسط الأقدمية'} average level of seniority\n5 {'مستوى الأقدمية': 'مستوى المبتدئين'} beginners' level\n6 {'نوع التوظيف': 'دوام كامل'} full time\n7 {'مستوى الأقدمية': 'مساعد'} assistant\n8 {'مستوى الأقدمية': 'غير مطبق'} unprecedented\n9 {'Seniority level': 'Internship'} Internship\n10 {'Employment type': 'Contract'} Contract\n11 {'Employment type': 'Full-time'} Full-time\n\nLook into official google translate api if googletrans eventually breaks.\n"
] | [
0,
0
] | [] | [] | [
"pandas",
"python",
"python_3.x",
"split"
] | stackoverflow_0074674980_pandas_python_python_3.x_split.txt |
Q:
Selenium driver hanging on OS alert
I'm using Selenium in Python (3.11) with a Firefox (107) driver.
With the driver I navigate to a page which, after several actions, triggers an OS alert (prompting me to launch a program). When this alert pops up, the driver hangs, and only once it is closed manually does my script continue to run.
I have tried driver.quit(), as well as using
os.system("taskkill /F /pid " + str(process.ProcessId))
with the driver's PID, with no luck.
I have managed to prevent the pop-up from popping up with
options.set_preference("security.external_protocol_requires_permission", False)
but the code still hangs the same way at the point where the popup would have popped up.
I don't care whether the program launches or not, I just need my code to not require human intervention at this key point.
here is a minimal example of what I currently have:
from selenium.webdriver import ActionChains, Keys
from selenium.webdriver.firefox.options import Options
from seleniumwire import webdriver
options = Options()
options.binary_location = r'C:\Program Files\Mozilla Firefox\firefox.exe'
options.set_preference("security.external_protocol_requires_permission", False)
driver = webdriver.Firefox(options=options)
# Go to the page
driver.get(url)
user_field = driver.find_element("id", "UserName")
user_field.send_keys(username)
pass_field = driver.find_element("id", "Password")
pass_field.send_keys(password)
pass_field.send_keys(Keys.ENTER)
#this is the point where the pop up appears
reqs = driver.requests
print("Success!")
driver.quit()
A:
There are some prefs you can try
profile = webdriver.FirefoxProfile()
profile.set_preference('dom.push.enabled', False)
# or
profile = webdriver.FirefoxProfile()
profile.set_preference('dom.webnotifications.enabled', False)
profile.set_preference('dom.webnotifications.serviceworker.enabled', False)
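Since the question builds the driver from Options rather than a FirefoxProfile, the same preferences can be set there as well — a sketch, and note this is not guaranteed to suppress an OS-level dialog:
from selenium.webdriver.firefox.options import Options

options = Options()
options.set_preference('dom.push.enabled', False)
options.set_preference('dom.webnotifications.enabled', False)
options.set_preference('dom.webnotifications.serviceworker.enabled', False)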
A:
Have you tried setting this preference to prevent the particular popup:
profile.set_preference('browser.helperApps.neverAsk.openFile', 'typeOfFile')
# e.g. profile.set_preference('browser.helperApps.neverAsk.openFile', 'application/xml,application/octet-stream')
Or have you tried just dismissing the popup:
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
....
pass_field.send_keys(Keys.ENTER)
#this is the point where the pop up appears
WebDriverWait(driver, 5).until(EC.alert_is_present()).dismiss()
reqs = driver.requests
...
A:
check this checkbox manually then open the app for every app associated to the links you use, then it will work normally.
A:
It sounds like you're having trouble with an alert that pops up when using Selenium with Firefox. One way to handle this is to use the Alert class in Selenium. This class provides methods for accepting, dismissing, or sending keys to the alert.
Here's an example of how you could use the Alert class to handle the alert in your code:
# After navigating to the page that triggers the alert
alert = driver.switch_to.alert
# Use the alert methods to handle the alert as needed
alert.accept() # This will accept the alert, launching the program
# or
alert.dismiss() # This will dismiss the alert, not launching the program
# or
alert.send_keys("some text") # This will enter text in the alert and accept it
Alternatively, you could use the unhandled_prompt_behavior option in the FirefoxOptions class to specify how Selenium should handle unhandled alerts. This option takes one of three values:
accept: Accept the alert (launch the program in your case)
dismiss: Dismiss the alert (not launch the program in your case)
ignore: Ignore the alert, allowing the script to continue running
Here's an example of how you could use the unhandled_prompt_behavior option to handle the alert:
from selenium.webdriver.firefox.options import Options
# Set the unhandled_prompt_behavior option to dismiss
options = Options()
options.binary_location = r'C:\Program Files\Mozilla Firefox\firefox.exe'
options.set_preference("security.external_protocol_requires_permission", False)
options.unhandled_prompt_behavior = "dismiss"
driver = webdriver.Firefox(options=options)
# Navigate to the page that triggers the alert
driver.get(url)
# The alert should be automatically dismissed by the unhandled_prompt_behavior setting
A:
I believe you should be able to handle the OS alert by calling driver.switch_to.alert.accept() or driver.switch_to.alert.dismiss() after navigating to the page where the alert pops up. This will automatically accept or dismiss the alert and allow your script to continue running. Here is an example of how you could use this in your code:
# Go to the page
driver.get(url)
user_field = driver.find_element("id", "UserName")
user_field.send_keys(username)
pass_field = driver.find_element("id", "Password")
pass_field.send_keys(password)
pass_field.send_keys(Keys.ENTER)
# Handle the alert by dismissing it
driver.switch_to.alert.dismiss()
# Your script can continue running now
reqs = driver.requests
print("Success!")
driver.quit()
Alternatively, if you don't want to handle the alert and just want to avoid it entirely, you can use the ExpectedConditions class from the selenium.webdriver.support.ui module to wait for the alert to be dismissed before continuing. Here is an example of how you could use this:
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
# Go to the page
driver.get(url)
user_field = driver.find_element("id", "UserName")
user_field.send_keys(username)
pass_field = driver.find_element("id", "Password")
pass_field.send_keys(password)
pass_field.send_keys(Keys.ENTER)
# Wait for the alert to be dismissed before continuing
WebDriverWait(driver, 10).until_not(EC.alert_is_present())
# Your script can continue running now
reqs = driver.requests
print("Success!")
driver.quit()
I hope this helps! Let me know if you have any other questions.
A:
To handle the OS alert, you can use the WebDriverWait class from the selenium.webdriver.support.ui module and the Alert class from the selenium.webdriver.common.alert module. You can use these classes to wait for the alert to be present and then either accept or dismiss the alert depending on what you want to do.
Here is an example:
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.common.alert import Alert
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
# Create an instance of Firefox WebDriver
driver = webdriver.Firefox()
# Go to the page
driver.get(url)
user_field = driver.find_element(By.ID, "UserName")
user_field.send_keys(username)
pass_field = driver.find_element(By.ID, "Password")
pass_field.send_keys(password)
pass_field.send_keys(Keys.ENTER)
# Wait for the alert to be present and handle it
WebDriverWait(driver, 10).until(EC.alert_is_present())
alert = Alert(driver)
alert.accept() # or alert.dismiss() to dismiss the alert
reqs = driver.requests
print("Success!")
driver.quit()
A:
One way to handle the alert that pops up is to use the Alert class in Selenium. You can use the switch_to_alert() method to switch to the alert, and then use the accept() or dismiss() method to accept or dismiss the alert.
Here is an example:
from selenium.webdriver import ActionChains, Keys
from selenium.webdriver.firefox.options import Options
from selenium.webdriver.common.alert import Alert
from seleniumwire import webdriver
options = Options()
options.binary_location = r'C:\Program Files\Mozilla Firefox\firefox.exe'
options.set_preference("security.external_protocol_requires_permission", False)
driver = webdriver.Firefox(options=options)
# Go to the page
driver.get(url)
user_field = driver.find_element("id", "UserName")
user_field.send_keys(username)
pass_field = driver.find_element("id", "Password")
pass_field.send_keys(password)
pass_field.send_keys(Keys.ENTER)
# Handle the alert if it appears
try:
alert = Alert(driver)
alert.dismiss()
except:
pass
reqs = driver.requests
print("Success!")
driver.quit()
Alternatively, you can use the execute_script() method to handle the alert. This method allows you to execute JavaScript code in the context of the current page. You can use this method to call the alert(), confirm(), or prompt() function in the browser to handle the alert.
Here is an example:
from selenium.webdriver import ActionChains, Keys
from selenium.webdriver.firefox.options import Options
from seleniumwire import webdriver
options = Options()
options.binary_location = r'C:\Program Files\Mozilla Firefox\firefox.exe'
driver = webdriver.Firefox(options=options)
# Go to the page
driver.get(url)
user_field = driver.find_element("id", "UserName")
user_field.send_keys(username)
pass_field = driver.find_element("id", "Password")
pass_field.send_keys(password)
pass_field.send_keys(Keys.ENTER)
# Use the execute_script() method to handle the alert
driver.execute_script("alert('This is an alert')")
# Continue with the script
reqs = driver.requests
print("Success!")
driver.quit()
| Selenium driver hanging on OS alert | I'm using Selenium in Python (3.11) with a Firefox (107) driver.
With the driver I navigate to a page which, after several actions, triggers an OS alert (prompting me to launch a program). When this alert pops up, the driver hangs, and only once it is closed manually does my script continue to run.
I have tried driver.quit(), as well as using
os.system("taskkill /F /pid " + str(process.ProcessId))
with the driver's PID, with no luck.
I have managed to prevent the pop-up from popping up with
options.set_preference("security.external_protocol_requires_permission", False)
but the code still hangs the same way at the point where the popup would have popped up.
I don't care whether the program launches or not, I just need my code to not require human intervention at this key point.
here is a minimal example of what I currently have:
from selenium.webdriver import ActionChains, Keys
from selenium.webdriver.firefox.options import Options
from seleniumwire import webdriver
options = Options()
options.binary_location = r'C:\Program Files\Mozilla Firefox\firefox.exe'
options.set_preference("security.external_protocol_requires_permission", False)
driver = webdriver.Firefox(options=options)
# Go to the page
driver.get(url)
user_field = driver.find_element("id", "UserName")
user_field.send_keys(username)
pass_field = driver.find_element("id", "Password")
pass_field.send_keys(password)
pass_field.send_keys(Keys.ENTER)
#this is the point where the pop up appears
reqs = driver.requests
print("Success!")
driver.quit()
| [
"There are some prefs you can try\nprofile = webdriver.FirefoxProfile()\nprofile.set_preference('dom.push.enabled', False)\n\n# or\n\nprofile = webdriver.FirefoxProfile()\nprofile.set_preference('dom.webnotifications.enabled', False)\nprofile.set_preference('dom.webnotifications.serviceworker.enabled', False)\n\n",
"Have you tried setting this preference to prevent the particular popup:\nprofile.set_preference('browser.helperApps.neverAsk.openFile', 'typeOfFile') \n# e.g. profile.set_preference('browser.helperApps.neverAsk.openFile', 'application/xml,application/octet-stream')\n\nOr have you tried just dismissing the popup:\nfrom selenium.webdriver.support.ui import WebDriverWait\nfrom selenium.webdriver.support import expected_conditions as EC\n\n....\npass_field.send_keys(Keys.ENTER)\n\n#this is the point where the pop up appears\nWebDriverWait(driver, 5).until(EC.alert_is_present).dismiss()\nreqs = driver.requests\n...\n\n",
"check this checkbox manually then open the app for every app associated to the links you use, then it will work normally.\n\n",
"It sounds like you're having trouble with an alert that pops up when using Selenium with Firefox. One way to handle this is to use the Alert class in Selenium. This class provides methods for accepting, dismissing, or sending keys to the alert.\nHere's an example of how you could use the Alert class to handle the alert in your code:\n# After navigating to the page that triggers the alert\nalert = driver.switch_to.alert\n\n# Use the alert methods to handle the alert as needed\nalert.accept() # This will accept the alert, launching the program\n# or\nalert.dismiss() # This will dismiss the alert, not launching the program\n# or\nalert.send_keys(\"some text\") # This will enter text in the alert and accept it\n\nAlternatively, you could use the unhandled_prompt_behavior option in the FirefoxOptions class to specify how Selenium should handle unhandled alerts. This option takes one of three values:\naccept: Accept the alert (launch the program in your case)\ndismiss: Dismiss the alert (not launch the program in your case)\nignore: Ignore the alert, allowing the script to continue running\nHere's an example of how you could use the unhandled_prompt_behavior option to handle the alert:\nfrom selenium.webdriver.firefox.options import Options\n\n# Set the unhandled_prompt_behavior option to dismiss\noptions = Options()\noptions.binary_location = r'C:\\Program Files\\Mozilla Firefox\\firefox.exe'\noptions.set_preference(\"security.external_protocol_requires_permission\", False)\noptions.unhandled_prompt_behavior = \"dismiss\"\ndriver = webdriver.Firefox(options=options)\n\n# Navigate to the page that triggers the alert\ndriver.get(url)\n\n# The alert should be automatically dismissed by the unhandled_prompt_behavior setting\n\n",
"I believe you should be able to handle the OS alert by calling driver.switch_to.alert.accept() or driver.switch_to.alert.dismiss() after navigating to the page where the alert pops up. This will automatically accept or dismiss the alert and allow your script to continue running. Here is an example of how you could use this in your code:\n# Go to the page\ndriver.get(url)\n\nuser_field = driver.find_element(\"id\", \"UserName\")\nuser_field.send_keys(username)\npass_field = driver.find_element(\"id\", \"Password\")\npass_field.send_keys(password)\npass_field.send_keys(Keys.ENTER)\n\n# Handle the alert by dismissing it\ndriver.switch_to.alert.dismiss()\n\n# Your script can continue running now\nreqs = driver.requests\n\nprint(\"Success!\")\ndriver.quit()\n\n\nAlternatively, if you don't want to handle the alert and just want to avoid it entirely, you can use the ExpectedConditions class from the selenium.webdriver.support.ui module to wait for the alert to be dismissed before continuing. Here is an example of how you could use this:\nfrom selenium.webdriver.support.ui import WebDriverWait\nfrom selenium.webdriver.support import expected_conditions as EC\n\n# Go to the page\ndriver.get(url)\n\nuser_field = driver.find_element(\"id\", \"UserName\")\nuser_field.send_keys(username)\npass_field = driver.find_element(\"id\", \"Password\")\npass_field.send_keys(password)\npass_field.send_keys(Keys.ENTER)\n\n# Wait for the alert to be dismissed before continuing\nWebDriverWait(driver, 10).until(EC.alert_is_not_present())\n\n# Your script can continue running now\nreqs = driver.requests\n\nprint(\"Success!\")\ndriver.quit()\n\n\nI hope this helps! Let me know if you have any other questions.\n",
"To handle the OS alert, you can use the WebDriverWait class from the selenium.webdriver.common.by module and the Alert class from the selenium.webdriver.common.alerts module. You can use these classes to wait for the alert to be present and then either accept or dismiss the alert depending on what you want to do.\nHere is an example:\nfrom selenium import webdriver\nfrom selenium.webdriver.common.by import By\nfrom selenium.webdriver.common.alerts import Alert\nfrom selenium.webdriver.support.ui import WebDriverWait\nfrom selenium.webdriver.support import expected_conditions as EC\n\n# Create an instance of Firefox WebDriver\ndriver = webdriver.Firefox()\n\n# Go to the page\ndriver.get(url)\n\nuser_field = driver.find_element(By.ID, \"UserName\")\nuser_field.send_keys(username)\npass_field = driver.find_element(By.ID, \"Password\")\npass_field.send_keys(password)\npass_field.send_keys(Keys.ENTER)\n\n# Wait for the alert to be present and handle it\nWebDriverWait(driver, 10).until(EC.alert_is_present())\nalert = Alert(driver)\nalert.accept() # or alert.dismiss() to dismiss the alert\n\nreqs = driver.requests\n\nprint(\"Success!\")\ndriver.quit()\n\n",
"One way to handle the alert that pops up is to use the Alert class in Selenium. You can use the switch_to_alert() method to switch to the alert, and then use the accept() or dismiss() method to accept or dismiss the alert.\nHere is an example:\nfrom selenium.webdriver import ActionChains, Keys\nfrom selenium.webdriver.firefox.options import Options\nfrom selenium.webdriver.common.alert import Alert\nfrom seleniumwire import webdriver\n\noptions = Options()\noptions.binary_location = r'C:\\Program Files\\Mozilla Firefox\\firefox.exe'\noptions.set_preference(\"security.external_protocol_requires_permission\", False)\ndriver = webdriver.Firefox(options=options)\n\n# Go to the page\ndriver.get(url)\n\nuser_field = driver.find_element(\"id\", \"UserName\")\nuser_field.send_keys(username)\npass_field = driver.find_element(\"id\", \"Password\")\npass_field.send_keys(password)\npass_field.send_keys(Keys.ENTER)\n\n# Handle the alert if it appears\ntry:\n alert = Alert(driver)\n alert.dismiss()\nexcept:\n pass\n\nreqs = driver.requests\n\nprint(\"Success!\")\ndriver.quit()\n\nAlternatively, you can use the execute_script() method to handle the alert. This method allows you to execute JavaScript code in the context of the current page. You can use this method to call the alert(), confirm(), or prompt() function in the browser to handle the alert.\nHere is an example:\nfrom selenium.webdriver import ActionChains, Keys\nfrom selenium.webdriver.firefox.options import Options\nfrom seleniumwire import webdriver\n\noptions = Options()\noptions.binary_location = r'C:\\Program Files\\Mozilla Firefox\\firefox.exe'\ndriver = webdriver.Firefox(options=options)\n\n//Go to the page\ndriver.get(url)\n\nuser_field = driver.find_element(\"id\", \"UserName\")\nuser_field.send_keys(username)\npass_field = driver.find_element(\"id\", \"Password\")\npass_field.send_keys(password)\npass_field.send_keys(Keys.ENTER)\n\n//Use the execute_script() method to handle the alert\ndriver.execute_script(\"alert('This is an alert')\")\n\n//Continue with the script\nreqs = driver.requests\n\nprint(\"Success!\")\ndriver.quit()\n\n"
] | [
3,
3,
1,
0,
0,
0,
0
] | [] | [] | [
"python",
"python_3.x",
"selenium",
"selenium_chromedriver",
"selenium_webdriver"
] | stackoverflow_0074563548_python_python_3.x_selenium_selenium_chromedriver_selenium_webdriver.txt |
Q:
clickable dropdown responsive with js
I'm having trouble with the JS in this code. I have 2 clickable dropdowns, but only one of them (the first dropdown) is working. I don't know how to fix it.
here's the html part:
<div id="wrap">
<nav>
<div class="logo">
<img src="./photos-docs/ME-marine-logo.png" alt="logo" class="logo" />
</div>
<button type="button" class="btn-hamburger" data-action="nav-toggle">
<span></span>
<span></span>
<span></span>
<span></span>
<span></span>
</button>
<ul class="nav-menu">
<li class="nav-item"><a href="index.html">עמוד ראשי</a></li>
<li class="nav-item dropdown">
<a href="#" data-action="dropdown-toggle">עיסויים </a>
<div class="dropdown-menu">
<a class="dropdown-item" href="#">רפואי</a>
<a class="dropdown-item" href="#">שוודי</a>
<a class="dropdown-item" href="#">רקמות עמוקות</a>
<a class="dropdown-item" href="#">ניקוז לימפטי</a>
<a class="dropdown-item" href="#">אבנים חמות</a>
</div>
</li>
<li class="nav-item dropdown">
<a href="#" data-action="dropdown-toggle">טיפולי פנים </a>
<div class="dropdown-menu">
<a class="dropdown-item" href="#">קלאסי</a>
<a class="dropdown-item" href="#">יופי</a>
<a class="dropdown-item" href="#">אקנה</a>
<a class="dropdown-item" href="#">פילינג</a>
<a class="dropdown-item" href="#">מיצוק</a>
<a class="dropdown-item" href="#">פיגמנטציה</a>
<a class="dropdown-item" href="#">אנטי אייג׳ינג</a>
</div>
</li>
<li class="nav-item"><a href="#">מזותרפיה</a></li>
<li class="nav-item"><a href="#">מיקרובליידינג</a></li>
<li class="nav-item"><a href="#">הזמינו תור</a></li>
<li class="nav-item"><a href="#">צרו קשר</a></li>
<li class="nav-item"><a href="tel:+972547809308">0547809308</a></li>
<li class="nav-item"><a href="https://api.whatsapp.com/send?phone=972547809308"><i class="fa-brands fa-whatsapp"></i></a>
</li>
and here's the js part:
let nav = document.querySelector('nav');
let dropdown = nav.querySelector('.dropdown');
let dropdownToggle = nav.querySelector("[data-action='dropdown-toggle']");
let navToggle = nav.querySelector("[data-action='nav-toggle']");
dropdownToggle.addEventListener('click', () => {
if (dropdown.classList.contains('show')) {
dropdown.classList.remove('show');
} else {
dropdown.classList.add('show');
}
})
navToggle.addEventListener('click', () => {
if (nav.classList.contains('opened')) {
nav.classList.remove('opened');
} else {
nav.classList.add('opened');
}
})
what should I do from here? I know the problem is in the JS but I don't know how to keep going from here, I'm stuck.
A:
In your code, you are using the querySelector method to select a single .dropdown element, which is why only the first dropdown is working.
You need to use the querySelectorAll method instead, which will return a list of all elements that match the given selector. You can then loop through this list and add the click event listener to each dropdown menu.
let nav = document.querySelector('nav');
let dropdowns = nav.querySelectorAll('.dropdown');
let dropdownToggles = nav.querySelectorAll("[data-action='dropdown-toggle']");
let navToggle = nav.querySelector("[data-action='nav-toggle']");
dropdownToggles.forEach(function(toggle, index) {
toggle.addEventListener('click', function() {
let dropdown = dropdowns[index];
if (dropdown.classList.contains('show')) {
dropdown.classList.remove('show');
} else {
dropdown.classList.add('show');
}
});
});
navToggle.addEventListener('click', () => {
if (nav.classList.contains('opened')) {
nav.classList.remove('opened');
} else {
nav.classList.add('opened');
}
});
A:
Problem is here
let dropdown = nav.querySelector('.dropdown');
https://developer.mozilla.org/en-US/docs/Web/API/Document/querySelector
querySelector() returns only the first element that matches the specified selector!
That's why only one dropdown works.
You should use
https://developer.mozilla.org/en-US/docs/Web/API/Document/querySelectorAll
And loop through every element to add event listener to them - just like you did in your code.
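A minimal sketch of that loop, reusing the data-action attribute from the question's markup:
document.querySelectorAll("[data-action='dropdown-toggle']").forEach((toggle) => {
  toggle.addEventListener('click', (event) => {
    event.preventDefault();
    // each toggle sits inside its own .dropdown <li>, so only that one is toggled
    toggle.closest('.dropdown').classList.toggle('show');
  });
});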
| clickable dropdown responsive with js | I'm having trouble with the JS in this code. I have 2 clickable dropdowns, but only one of them (the first dropdown) is working. I don't know how to fix it.
here's the html part:
<div id="wrap">
<nav>
<div class="logo">
<img src="./photos-docs/ME-marine-logo.png" alt="logo" class="logo" />
</div>
<button type="button" class="btn-hamburger" data-action="nav-toggle">
<span></span>
<span></span>
<span></span>
<span></span>
<span></span>
</button>
<ul class="nav-menu">
<li class="nav-item"><a href="index.html">עמוד ראשי</a></li>
<li class="nav-item dropdown">
<a href="#" data-action="dropdown-toggle">עיסויים </a>
<div class="dropdown-menu">
<a class="dropdown-item" href="#">רפואי</a>
<a class="dropdown-item" href="#">שוודי</a>
<a class="dropdown-item" href="#">רקמות עמוקות</a>
<a class="dropdown-item" href="#">ניקוז לימפטי</a>
<a class="dropdown-item" href="#">אבנים חמות</a>
</div>
</li>
<li class="nav-item dropdown">
<a href="#" data-action="dropdown-toggle">טיפולי פנים </a>
<div class="dropdown-menu">
<a class="dropdown-item" href="#">קלאסי</a>
<a class="dropdown-item" href="#">יופי</a>
<a class="dropdown-item" href="#">אקנה</a>
<a class="dropdown-item" href="#">פילינג</a>
<a class="dropdown-item" href="#">מיצוק</a>
<a class="dropdown-item" href="#">פיגמנטציה</a>
<a class="dropdown-item" href="#">אנטי אייג׳ינג</a>
</div>
</li>
<li class="nav-item"><a href="#">מזותרפיה</a></li>
<li class="nav-item"><a href="#">מיקרובליידינג</a></li>
<li class="nav-item"><a href="#">הזמינו תור</a></li>
<li class="nav-item"><a href="#">צרו קשר</a></li>
<li class="nav-item"><a href="tel:+972547809308">0547809308</a></li>
<li class="nav-item"><a href="https://api.whatsapp.com/send?phone=972547809308"><i class="fa-brands fa-whatsapp"></i></a>
</li>
and here's the js part:
let nav = document.querySelector('nav');
let dropdown = nav.querySelector('.dropdown');
let dropdownToggle = nav.querySelector("[data-action='dropdown-toggle']");
let navToggle = nav.querySelector("[data-action='nav-toggle']");
dropdownToggle.addEventListener('click', () => {
if (dropdown.classList.contains('show')) {
dropdown.classList.remove('show');
} else {
dropdown.classList.add('show');
}
})
navToggle.addEventListener('click', () => {
if (nav.classList.contains('opened')) {
nav.classList.remove('opened');
} else {
nav.classList.add('opened');
}
})
what should I do from here? I know the problem is in the JS but I don't know how to keep going from here, I'm stuck.
| [
"In your code, you are using the querySelector method to select a single .dropdown element, which is why only the first dropdown is working.\nYou need to use the querySelectorAll method instead, which will return a list of all elements that match the given selector. You can then loop through this list and add the click event listener to each dropdown menu.\nlet nav = document.querySelector('nav');\nlet dropdowns = nav.querySelectorAll('.dropdown');\nlet dropdownToggles = nav.querySelectorAll(\"[data-action='dropdown-toggle']\");\nlet navToggle = nav.querySelector(\"[data-action='nav-toggle']\");\n\ndropdownToggles.forEach(function(toggle, index) {\n toggle.addEventListener('click', function() {\n let dropdown = dropdowns[index];\n if (dropdown.classList.contains('show')) {\n dropdown.classList.remove('show');\n } else {\n dropdown.classList.add('show');\n }\n });\n});\n\nnavToggle.addEventListener('click', () => {\n if (nav.classList.contains('opened')) {\n nav.classList.remove('opened');\n } else {\n nav.classList.add('opened');\n }\n});\n\n",
"Problem is here\nlet dropdown = nav.querySelector('.dropdown');\n\nhttps://developer.mozilla.org/en-US/docs/Web/API/Document/querySelector\nquerySelector() returns only first element that matches the specified selector!\nThats why only one dropdown works.\nYou should use\nhttps://developer.mozilla.org/en-US/docs/Web/API/Document/querySelectorAll\nAnd loop through every element to add event listener to them - just like you did in your code.\n"
] | [
0,
0
] | [] | [] | [
"click",
"drop_down_menu",
"html",
"javascript",
"navbar"
] | stackoverflow_0074675883_click_drop_down_menu_html_javascript_navbar.txt |
Q:
python: AttributeError: 'list' object has no attribute 'groupby'
I am following a Youtube tutorial on a streamlit application, however the error
"AttributeError: 'list' object has no attribute 'groupby'"
occurred when I was trying to group my list that I scraped from Wikipedia; the instructor had the exact code as me but didn't face a problem, where am I missing out exactly?
import streamlit as st
import pandas as pd
@st.cache
def load_data():
url = 'https://en.wikipedia.org/wiki/List_of_S%26P_500_companies'
html = pd.read_html(url, header = 0)
df = html[0]
return df
df = load_data()
df = df.groupby('GICS Sector')
A:
I fixed it, I just had to reassign the df variable to its first index
import streamlit as st
import pandas as pd
@st.cache
def load_data():
url = "https://en.wikipedia.org/wiki/List_of_S%26P_500_companies"
html = pd.read_html(url, header=0)
df = html[0]
return df
df = load_data()
df = df[0]
df = df.groupby("GICS Sector")
| python: AttributeError: 'list' object has no attribute 'groupby' | I am following a Youtube tutorial on a streamlit application, however the error
"AttributeError: 'list' object has no attribute 'groupby'"
occurred when I was trying to group my list that I scraped from Wikipedia; the instructor had the exact code as me but didn't face a problem, where am I missing out exactly?
import streamlit as st
import pandas as pd
@st.cache
def load_data():
url = 'https://en.wikipedia.org/wiki/List_of_S%26P_500_companies'
html = pd.read_html(url, header = 0)
df = html[0]
return df
df = load_data()
df = df.groupby('GICS Sector')
| [
"I fixed it, I just had to reassign the df variable to it's first index\nimport streamlit as st\nimport pandas as pd\n\[email protected]\ndef load_data():\n url = \"https://en.wikipedia.org/wiki/List_of_S%26P_500_companies\"\n html = pd.read_html(url, header=0)\n df = html[0]\n return df\n\ndf = load_data()\ndf = df[0]\ndf = df.groupby(\"GICS Sector\")\n\n"
] | [
1
] | [] | [] | [
"pandas",
"python",
"streamlit"
] | stackoverflow_0074675820_pandas_python_streamlit.txt |
Q:
Parsing an XML file that contains HTML snippets, renaming HTML class names, and then write back the XML file
I've got XML files that contain HTML snippets. I'm trying to write a Python script that opens such an XML file, searches for the elements containing the HTML, renames the classes, and then writes back the new XML file to file.
Here's an XML example:
<?xml version="1.0" encoding="UTF-8"?>
<question_categories>
<question_category id="18883">
<name>templates</name>
<questions>
<question id="1419226">
<parent>0</parent>
<name>_template_master</name>
<questiontext>
<div class="wrapper">
<div class="wrapper element">
<span>Exercise 1</span>
</div>
</div>
</questiontext>
</question>
<question id="1419238">
<parent>0</parent>
<name>_template_singleDropDown</name>
<questiontext>
<div class="wrapper">
<div class="element wrapper">
<span>Exercise 2</span>
</div>
</div>
</questiontext>
</question>
</questions>
</question_category>
</question_categories>
The element containing the HTML is <questiontext>, the HTML class to be renamed is wrapper, and the new class name should be prefixed-wrapper.
I succeeded to loop through the XML, extracting the HTML and also to rename the class, but I don't know how to put everything together, so at the end I get an XML file with the renamed class names.
This is my code so far:
from bs4 import BeautifulSoup
with open('dummy_short.xml', 'r') as f:
file = f.read()
soup_xml = BeautifulSoup(file, 'xml')
for questiontext in soup_xml.find_all('questiontext'):
for singleclass in BeautifulSoup(questiontext.text, 'html.parser').find_all(class_='wrapper'):
pos = singleclass.attrs['class'].index('wrapper')
singleclass.attrs['class'][pos] = 'prefixed-wrapper'
print(soup_xml)
Unfortunately, when printing soup_xml at the end, the contents are unchanged, i.e. the class names aren't renamed.
EDIT: Since one and the same class name can occur in very different and complex contexts (for example along with other classes, i.e. class="xxx yyy wrapper zzz"), a static match isn't working. And instead of using complicated and non-comprehensible regexes, I have to use a parser like beautifulsoup (because they are made exactly for this purpose!).
A:
After your comment I have changed my code a little bit.
Now the HTML part is correctly escaped, but the empty tags are gone. Anyway, the XML is valid. It seems tree.write() has some trouble with mixed XML and inserted HTML sequences.
import xml.etree.ElementTree as ET
from html import escape, unescape
tree = ET.parse('source.xml')
root = tree.getroot()
def replace_html(elem):
dummyXML = ET.fromstring(elem)
for htm in dummyXML.iter('div'):
if htm.tag == "div" and htm.get('class') =="wrapper":
htm.set('class', "prefixed-wrapper")
return ET.tostring(dummyXML, method='html').decode('utf-8')
for elem in root.iter("questiontext"):
html = replace_html(unescape(elem.text))
elem.text = escape(html)
with open('new.xml', 'w') as f:
f.write(f'<?xml version="1.0" encoding="UTF-8"?>')
with open('new.xml', 'a') as f:
f.write(ET.tostring(root).decode('utf-8').replace('&','&'))
The source XML file is "source.xml" and the updated XML file name is "new.xml".
Output (changed part only):
<questiontext>
<div class="prefixed-wrapper">
<div class="wrapper element">
<span>Exercise 1</span>
</div>
</div>
</questiontext>
A:
Option 2: Your preferred BeautifulSoup solution
from bs4 import BeautifulSoup
#from xml.sax.saxutils import quoteattr, escape, unescape
import re
# Get the XML soup
with open('source.xml', 'r') as f:
file = f.read()
soup_xml = BeautifulSoup(file, 'xml')
def soup_htm(elm):
"""Modify attributes according request """
# Get the html soup
soup = BeautifulSoup(elm.string, 'html.parser')
for elem in soup.find_all('div'):
if elem.attrs== {'class': ['wrapper']}:
elem['class'] = ['prefixed-wrapper']
if elem.attrs== {'class': ['wrapper', 'element']}:
elem['class'] = ['prefixed-wrapper', 'element']
if elem.attrs== {'class': ['element', 'wrapper']}:
elem['class'] = ['element', 'prefixed-wrapper']
return re.sub('"','"', str(soup))
# Find element and replace it
for questiontext in soup_xml.find_all('questiontext'):
htm_changed = soup_htm(questiontext)
questiontext = questiontext.string.wrap(soup_xml.new_tag("questiontext")).replace_with(htm_changed)
# Print result
print(soup_xml.prettify())
I prefer the built-in Python option, but this is also nice and maybe easier with such mixed XML/HTML documents. Anyway, the single/double quotes make trouble. Maybe another user can help.
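For completeness, a more generic BeautifulSoup sketch that renames the class wherever it appears instead of hard-coding the class combinations; it assumes the HTML really is stored as child elements of <questiontext>, as in the example, and the file names are placeholders:
from bs4 import BeautifulSoup

def has_wrapper(tag):
    cls = tag.get('class')
    if cls is None:
        return False
    # the xml parser usually keeps class as one string, html parsers as a list
    classes = cls.split() if isinstance(cls, str) else cls
    return 'wrapper' in classes

with open('source.xml', 'r') as f:
    soup_xml = BeautifulSoup(f.read(), 'xml')

for questiontext in soup_xml.find_all('questiontext'):
    for tag in questiontext.find_all(has_wrapper):
        classes = tag['class']
        classes = classes.split() if isinstance(classes, str) else list(classes)
        tag['class'] = ' '.join('prefixed-wrapper' if c == 'wrapper' else c
                                for c in classes)

with open('renamed.xml', 'w') as f:
    f.write(str(soup_xml))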
| Parsing an XML file that contains HTML snippets, renaming HTML class names, and then write back the XML file | I've got XML files that contain HTML snippets. I'm trying to write a Python script that opens such an XML file, searches for the elements containing the HTML, renames the classes, and then writes back the new XML file to file.
Here's an XML example:
<?xml version="1.0" encoding="UTF-8"?>
<question_categories>
<question_category id="18883">
<name>templates</name>
<questions>
<question id="1419226">
<parent>0</parent>
<name>_template_master</name>
<questiontext>
<div class="wrapper">
<div class="wrapper element">
<span>Exercise 1</span>
</div>
</div>
</questiontext>
</question>
<question id="1419238">
<parent>0</parent>
<name>_template_singleDropDown</name>
<questiontext>
<div class="wrapper">
<div class="element wrapper">
<span>Exercise 2</span>
</div>
</div>
</questiontext>
</question>
</questions>
</question_category>
</question_categories>
The element containing the HTML is <questiontext>, the HTML class to be renamed is wrapper, and the new class name should be prefixed-wrapper.
I succeeded to loop through the XML, extracting the HTML and also to rename the class, but I don't know how to put everything together, so at the end I get an XML file with the renamed class names.
This is my code so far:
from bs4 import BeautifulSoup
with open('dummy_short.xml', 'r') as f:
file = f.read()
soup_xml = BeautifulSoup(file, 'xml')
for questiontext in soup_xml.find_all('questiontext'):
for singleclass in BeautifulSoup(questiontext.text, 'html.parser').find_all(class_='wrapper'):
pos = singleclass.attrs['class'].index('wrapper')
singleclass.attrs['class'][pos] = 'prefixed-wrapper'
print(soup_xml)
Unfortunately, when printing soup_xml at the end, the contents are unchanged, i.e. the class names aren't renamed.
EDIT: Since one and the same class name can occur in very different and complex contexts (for example along with other classes, i.e. class="xxx yyy wrapper zzz"), a static match isn't working. And instead of using complicated and non-comprehensible regexes, I have to use a parser like beautifulsoup (because they are made exactly for this purpose!).
| [
"After your comment I have changed my code a little bit.\nNow the html part is correct escaped, but the empty tags are gone. Anyway the XML is valid. It seems tree.write() have some trouble with mixed XML and inserted html sequences.\nimport xml.etree.ElementTree as ET\nfrom html import escape, unescape\n\ntree = ET.parse('source.xml')\nroot = tree.getroot()\n\ndef replace_html(elem):\n dummyXML = ET.fromstring(elem)\n for htm in dummyXML.iter('div'):\n if htm.tag == \"div\" and htm.get('class') ==\"wrapper\":\n htm.set('class', \"prefixed-wrapper\") \n return ET.tostring(dummyXML, method='html').decode('utf-8')\n \nfor elem in root.iter(\"questiontext\"):\n html = replace_html(unescape(elem.text))\n elem.text = escape(html)\n \nwith open('new.xml', 'w') as f:\n f.write(f'<?xml version=\"1.0\" encoding=\"UTF-8\"?>')\n\nwith open('new.xml', 'a') as f:\n f.write(ET.tostring(root).decode('utf-8').replace('&','&'))\n\nThe source XML file is \"source.xml\" and the updated XML file name is \"new.xml\".\nOutput (changed part only):\n<questiontext>\n <div class="prefixed-wrapper">\n <div class="wrapper element">\n <span>Exercise 1</span>\n </div>\n </div>\n</questiontext>\n\n",
"Option2: Your prefered BeautifulSoup Solution\nfrom bs4 import BeautifulSoup\n#from xml.sax.saxutils import quoteattr, escape, unescape\nimport re\n\n# Get the XML soup\nwith open('source.xml', 'r') as f:\n file = f.read() \nsoup_xml = BeautifulSoup(file, 'xml')\n\ndef soup_htm(elm):\n \"\"\"Modify attributes according request \"\"\"\n # Get the html soup\n soup = BeautifulSoup(elm.string, 'html.parser')\n \n \n for elem in soup.find_all('div'):\n if elem.attrs== {'class': ['wrapper']}:\n elem['class'] = ['prefixed-wrapper']\n if elem.attrs== {'class': ['wrapper', 'element']}:\n elem['class'] = ['prefixed-wrapper', 'element']\n if elem.attrs== {'class': ['element', 'wrapper']}:\n elem['class'] = ['element', 'prefixed-wrapper'] \n return re.sub('\"','"', str(soup))\n\n# Find element and replace it \nfor questiontext in soup_xml.find_all('questiontext'):\n htm_changed = soup_htm(questiontext)\n questiontext = questiontext.string.wrap(soup_xml.new_tag(\"questiontext\")).replace_with(htm_changed)\n \n# Print result\nprint(soup_xml.prettify())\n\nI prefere the inbuild python, but this is also nice and maybe easier with such mixed XML/HTML documents. Anyway the single/ double quotes makes trouble. Maybe another user can help.\n"
] | [
1,
0
] | [] | [] | [
"beautifulsoup",
"html",
"python",
"xml"
] | stackoverflow_0074669395_beautifulsoup_html_python_xml.txt |
Q:
How to transform a xml response to the JSON array in wso2 EI
I'm getting an XML response and need to convert it to the JSON array.
XML response is as below:
<jsonObject>
<message>
<status>Success</status>
<timestam>2022-12-04T17:51:15.9841813+11:00</timestam>
<resultCount>35</resultCount>
<totalCount>35</totalCount>
<fields
xmlns="htt://ws.aache.org/ns/synase">
<AlicationNumber>
<listItems>
<text>97282A08</text>
<value>97282A08</value>
</listItems>
</AlicationNumber>
</fields>
<fields
xmlns="htt://ws.aache.org/ns/synase">
<AlicationNumber>
<listItems>
<text>185804A09</text>
<value>185804A09</value>
</listItems>
</AlicationNumber>
</fields>
<fields
xmlns="htt://ws.aache.org/ns/synase">
<AlicationNumber>
<listItems>
<text>241248A09</text>
<value>241248A09</value>
</listItems>
</AlicationNumber>
</fields>
<fields
xmlns="htt://ws.aache.org/ns/synase">
<AlicationNumber>
<listItems>
<text>258111A09</text>
<value>258111A09</value>
</listItems>
</AlicationNumber>
</fields>
<fields
xmlns="htt://ws.aache.org/ns/synase">
<AlicationNumber>
<listItems>
<text>429398A11</text>
<value>429398A11</value>
</listItems>
</AlicationNumber>
</fields>
<fields
xmlns="htt://ws.aache.org/ns/synase">
<AlicationNumber>
<listItems>
<text>450962A11</text>
<value>450962A11</value>
</listItems>
</AlicationNumber>
</fields>
<fields
xmlns="htt://ws.aache.org/ns/synase">
<AlicationNumber>
<listItems>
<text>588602A12</text>
<value>588602A12</value>
</listItems>
</AlicationNumber>
</fields>
<fields
xmlns="htt://ws.aache.org/ns/synase">
<AlicationNumber>
<listItems>
<text>618329A12</text>
<value>618329A12</value>
</listItems>
</AlicationNumber>
</fields>
<fields
xmlns="htt://ws.aache.org/ns/synase">
<AlicationNumber>
<listItems>
<text>624645A12</text>
<value>624645A12</value>
</listItems>
</AlicationNumber>
</fields>
<fields
xmlns="htt://ws.aache.org/ns/synase">
<AlicationNumber>
<listItems>
<text>643029A12</text>
<value>643029A12</value>
</listItems>
</AlicationNumber>
</fields>
<fields
xmlns="htt://ws.aache.org/ns/synase">
<AlicationNumber>
<listItems>
<text>655593A12</text>
<value>655593A12</value>
</listItems>
</AlicationNumber>
</fields>
<fields
xmlns="htt://ws.aache.org/ns/synase">
<AlicationNumber>
<listItems>
<text>684292A12</text>
<value>684292A12</value>
</listItems>
</AlicationNumber>
</fields>
<fields
xmlns="htt://ws.aache.org/ns/synase">
<AlicationNumber>
<listItems>
<text>903240A14</text>
<value>903240A14</value>
</listItems>
</AlicationNumber>
</fields>
<fields
xmlns="htt://ws.aache.org/ns/synase">
<AlicationNumber>
<listItems>
<text>1031807A15</text>
<value>1031807A15</value>
</listItems>
</AlicationNumber>
</fields>
<fields
xmlns="htt://ws.aache.org/ns/synase">
<AlicationNumber>
<listItems>
<text>1353624A17</text>
<value>1353624A17</value>
</listItems>
</AlicationNumber>
</fields>
<fields
xmlns="htt://ws.aache.org/ns/synase">
<AlicationNumber>
<listItems>
<text>1353626A17</text>
<value>1353626A17</value>
</listItems>
</AlicationNumber>
</fields>
<fields
xmlns="htt://ws.aache.org/ns/synase">
<AlicationNumber>
<listItems>
<text>1436375A18</text>
<value>1436375A18</value>
</listItems>
</AlicationNumber>
</fields>
<fields
xmlns="htt://ws.aache.org/ns/synase">
<AlicationNumber>
<listItems>
<text>1455356A18</text>
<value>1455356A18</value>
</listItems>
</AlicationNumber>
</fields>
<fields
xmlns="htt://ws.aache.org/ns/synase">
<AlicationNumber>
<listItems>
<text>1500185A18</text>
<value>1500185A18</value>
</listItems>
</AlicationNumber>
</fields>
<fields
xmlns="htt://ws.aache.org/ns/synase">
<AlicationNumber>
<listItems>
<text>1511985A18</text>
<value>1511985A18</value>
</listItems>
</AlicationNumber>
</fields>
<fields
xmlns="htt://ws.aache.org/ns/synase">
<AlicationNumber>
<listItems>
<text>1625059A19</text>
<value>1625059A19</value>
</listItems>
</AlicationNumber>
</fields>
<fields
xmlns="htt://ws.aache.org/ns/synase">
<AlicationNumber>
<listItems>
<text>1630914A19</text>
<value>1630914A19</value>
</listItems>
</AlicationNumber>
</fields>
<fields
xmlns="htt://ws.aache.org/ns/synase">
<AlicationNumber>
<listItems>
<text>1741745A20</text>
<value>1741745A20</value>
</listItems>
</AlicationNumber>
</fields>
<fields
xmlns="htt://ws.aache.org/ns/synase">
<AlicationNumber>
<listItems>
<text>1878082A21</text>
<value>1878082A21</value>
</listItems>
</AlicationNumber>
</fields>
<fields
xmlns="htt://ws.aache.org/ns/synase">
<AlicationNumber>
<listItems>
<text>2061825A22</text>
<value>2061825A22</value>
</listItems>
</AlicationNumber>
</fields>
<fields
xmlns="htt://ws.aache.org/ns/synase">
<AlicationNumber>
<listItems>
<text>2061829A22</text>
<value>2061829A22</value>
</listItems>
</AlicationNumber>
</fields>
<fields
xmlns="htt://ws.aache.org/ns/synase">
<AlicationNumber>
<listItems>
<text>2061830A22</text>
<value>2061830A22</value>
</listItems>
</AlicationNumber>
</fields>
<fields
xmlns="htt://ws.aache.org/ns/synase">
<AlicationNumber>
<listItems>
<text>5067/1993</text>
<value>5067/1993</value>
</listItems>
</AlicationNumber>
</fields>
<fields
xmlns="htt://ws.aache.org/ns/synase">
<AlicationNumber>
<listItems>
<text>497/1998</text>
<value>497/1998</value>
</listItems>
</AlicationNumber>
</fields>
<fields
xmlns="htt://ws.aache.org/ns/synase">
<AlicationNumber>
<listItems>
<text>954/1998</text>
<value>954/1998</value>
</listItems>
</AlicationNumber>
</fields>
<fields
xmlns="htt://ws.aache.org/ns/synase">
<AlicationNumber>
<listItems>
<text>206A09</text>
<value>206A09</value>
</listItems>
</AlicationNumber>
</fields>
<fields
xmlns="htt://ws.aache.org/ns/synase">
<AlicationNumber>
<listItems>
<text>34A09</text>
<value>34A09</value>
</listItems>
</AlicationNumber>
</fields>
<fields
xmlns="htt://ws.aache.org/ns/synase">
<AlicationNumber>
<listItems>
<text>15187A09</text>
<value>15187A09</value>
</listItems>
</AlicationNumber>
</fields>
<fields
xmlns="htt://ws.aache.org/ns/synase">
<AlicationNumber>
<listItems>
<text>15188A09</text>
<value>15188A09</value>
</listItems>
</AlicationNumber>
</fields>
<fields
xmlns="htt://ws.aache.org/ns/synase">
<AlicationNumber>
<listItems>
<text>18122A04</text>
<value>18122A04</value>
</listItems>
</AlicationNumber>
</fields>
</message>
I'm using messageType "application/JSON" and I get the result below:
{
"message": {
"status": "Success",
"timestam": "2022-12-04T17:51:15.9841813+11:00",
"resultCount": 35,
"totalCount": 35,
"fields": [
{
"AlicationNumber": {
"listItems": {
"text": "97282A08",
"value": "97282A08"
}
}
},
{
"AlicationNumber": {
"listItems": {
"text": "185804A09",
"value": "185804A09"
}
}
},
{
"AlicationNumber": {
"listItems": {
"text": "241248A09",
"value": "241248A09"
}
}
},
{
"AlicationNumber": {
"listItems": {
"text": "258111A09",
"value": "258111A09"
}
}
},
{
"AlicationNumber": {
"listItems": {
"text": "429398A11",
"value": "429398A11"
}
}
},
{
"AlicationNumber": {
"listItems": {
"text": "450962A11",
"value": "450962A11"
}
}
},
{
"AlicationNumber": {
"listItems": {
"text": "588602A12",
"value": "588602A12"
}
}
},
{
"AlicationNumber": {
"listItems": {
"text": "618329A12",
"value": "618329A12"
}
}
},
{
"AlicationNumber": {
"listItems": {
"text": "624645A12",
"value": "624645A12"
}
}
},
{
"AlicationNumber": {
"listItems": {
"text": "643029A12",
"value": "643029A12"
}
}
},
{
"AlicationNumber": {
"listItems": {
"text": "655593A12",
"value": "655593A12"
}
}
},
{
"AlicationNumber": {
"listItems": {
"text": "684292A12",
"value": "684292A12"
}
}
},
{
"AlicationNumber": {
"listItems": {
"text": "903240A14",
"value": "903240A14"
}
}
},
{
"AlicationNumber": {
"listItems": {
"text": "1031807A15",
"value": "1031807A15"
}
}
},
{
"AlicationNumber": {
"listItems": {
"text": "1353624A17",
"value": "1353624A17"
}
}
},
{
"AlicationNumber": {
"listItems": {
"text": "1353626A17",
"value": "1353626A17"
}
}
},
{
"AlicationNumber": {
"listItems": {
"text": "1436375A18",
"value": "1436375A18"
}
}
},
{
"AlicationNumber": {
"listItems": {
"text": "1455356A18",
"value": "1455356A18"
}
}
},
{
"AlicationNumber": {
"listItems": {
"text": "1500185A18",
"value": "1500185A18"
}
}
},
{
"AlicationNumber": {
"listItems": {
"text": "1511985A18",
"value": "1511985A18"
}
}
},
{
"AlicationNumber": {
"listItems": {
"text": "1625059A19",
"value": "1625059A19"
}
}
},
{
"AlicationNumber": {
"listItems": {
"text": "1630914A19",
"value": "1630914A19"
}
}
},
{
"AlicationNumber": {
"listItems": {
"text": "1741745A20",
"value": "1741745A20"
}
}
},
{
"AlicationNumber": {
"listItems": {
"text": "1878082A21",
"value": "1878082A21"
}
}
},
{
"AlicationNumber": {
"listItems": {
"text": "2061825A22",
"value": "2061825A22"
}
}
},
{
"AlicationNumber": {
"listItems": {
"text": "2061829A22",
"value": "2061829A22"
}
}
},
{
"AlicationNumber": {
"listItems": {
"text": "2061830A22",
"value": "2061830A22"
}
}
},
{
"AlicationNumber": {
"listItems": {
"text": "5067/1993",
"value": "5067/1993"
}
}
},
{
"AlicationNumber": {
"listItems": {
"text": "497/1998",
"value": "497/1998"
}
}
},
{
"AlicationNumber": {
"listItems": {
"text": "954/1998",
"value": "954/1998"
}
}
},
{
"AlicationNumber": {
"listItems": {
"text": "206A09",
"value": "206A09"
}
}
},
{
"AlicationNumber": {
"listItems": {
"text": "34A09",
"value": "34A09"
}
}
},
{
"AlicationNumber": {
"listItems": {
"text": "15187A09",
"value": "15187A09"
}
}
},
{
"AlicationNumber": {
"listItems": {
"text": "15188A09",
"value": "15188A09"
}
}
},
{
"AlicationNumber": {
"listItems": {
"text": "18122A04",
"value": "18122A04"
}
}
}
]
}
}
But the format that I need to get is as below:
{
"message": {
"status": "Success",
"timestam": "2022-12-04T17:51:15.9841813+11:00",
"resultCount": 35,
"totalCount": 35,
"fields": [
{
"AlicationNumber": {
"listItems": [
{
"text": "97282A08",
"value": "97282A08"
},
{
"text": "185804A09",
"value": "185804A09"
},
{
"text": "241248A09",
"value": "241248A09"
},
{"text": "258111A09",
"value": "258111A09"
},
...
]
}
}
]
}}
I have used the JSON transformer as well, but it didn't change the response format.
I used the below configuration for the JSON transformer
{
"$schema": "http://json-schema.org/draft-04/schema#",
"type": "object",
"properties": {
"fields": {
"type": "object",
"properties": {
"ApplicationNumber": {
"type": "object"
},
"listItems":{
"type": "Array",
"properties":{
"text":{
"type":"string"
},
"value":{
"type":"string"
}
}
}
}
}
}
}
Can you please help me with how I can achieve this?
I'm using WSO2 Integration Studio 7.2.
A:
You can use the PayloadFactory Mediator for this. Take a look at the example below.
<payloadFactory media-type="json">
<format>{
"message": {
"status": "$1",
"timestam": "$2",
"resultCount": $3,
"totalCount": $4,
"fields": [
{
"AlicationNumber": $5
}
]
}}
</format>
<args>
<arg evaluator="xml" expression="//message/status" />
<arg evaluator="xml" expression="//message/timestam" />
<arg evaluator="xml" expression="//message/resultCount" />
<arg evaluator="xml" expression="//message/totalCount" />
<arg evaluator="xml" expression="//sy:AlicationNumber/sy:listItems" xmlns:sy="htt://ws.aache.org/ns/synase"/>
</args>
</payloadFactory>
Complete API
<?xml version="1.0" encoding="UTF-8"?>
<api context="/jsontest" name="HelloWorld" xmlns="http://ws.apache.org/ns/synapse">
<resource methods="POST">
<inSequence>
<payloadFactory media-type="json">
<format>{
"message": {
"status": "$1",
"timestam": "$2",
"resultCount": $3,
"totalCount": $4,
"fields": [
{
"AlicationNumber": $5
}
]
}}
</format>
<args>
<arg evaluator="xml" expression="//message/status"/>
<arg evaluator="xml" expression="//message/timestam"/>
<arg evaluator="xml" expression="//message/resultCount"/>
<arg evaluator="xml" expression="//message/totalCount"/>
<arg evaluator="xml" expression="//sy:AlicationNumber/sy:listItems" xmlns:sy="htt://ws.aache.org/ns/synase"/>
</args>
</payloadFactory>
<log category="DEBUG" level="full"/>
<respond/>
</inSequence>
<outSequence/>
<faultSequence/>
</resource>
</api>
Request
curl --location --request POST 'http://localhost:8290/jsontest' \
--header 'Content-Type: application/xml' \
--data-raw '<jsonObject>
<message>
<status>Success</status>
<timestam>2022-12-04T17:51:15.9841813+11:00</timestam>
<resultCount>35</resultCount>
<totalCount>35</totalCount>
<fields
xmlns="htt://ws.aache.org/ns/synase">
<AlicationNumber>
<listItems>
<text>97282A08</text>
<value>97282A08</value>
</listItems>
</AlicationNumber>
</fields>
<fields
xmlns="htt://ws.aache.org/ns/synase">
<AlicationNumber>
<listItems>
<text>185804A09</text>
<value>185804A09</value>
</listItems>
</AlicationNumber>
</fields>
<fields
xmlns="htt://ws.aache.org/ns/synase">
<AlicationNumber>
<listItems>
<text>241248A09</text>
<value>241248A09</value>
</listItems>
</AlicationNumber>
</fields>
</message>
</jsonObject>'
Response
{
"message": {
"status": "Success",
"timestam": "2022-12-04T17:51:15.9841813+11:00",
"resultCount": 35,
"totalCount": 35,
"fields": [
{
"AlicationNumber": {
"listItems": [
{
"text": "97282A08",
"value": "97282A08"
},
{
"text": "185804A09",
"value": "185804A09"
},
{
"text": "241248A09",
"value": "241248A09"
}
]
}
}
]
}
}
| How to transform a xml response to the JSON array in wso2 EI | I'm getting an XML response and need to convert it to the JSON array.
XML response is as below:
<jsonObject>
<message>
<status>Success</status>
<timestam>2022-12-04T17:51:15.9841813+11:00</timestam>
<resultCount>35</resultCount>
<totalCount>35</totalCount>
<fields
xmlns="htt://ws.aache.org/ns/synase">
<AlicationNumber>
<listItems>
<text>97282A08</text>
<value>97282A08</value>
</listItems>
</AlicationNumber>
</fields>
<fields
xmlns="htt://ws.aache.org/ns/synase">
<AlicationNumber>
<listItems>
<text>185804A09</text>
<value>185804A09</value>
</listItems>
</AlicationNumber>
</fields>
<fields
xmlns="htt://ws.aache.org/ns/synase">
<AlicationNumber>
<listItems>
<text>241248A09</text>
<value>241248A09</value>
</listItems>
</AlicationNumber>
</fields>
<fields
xmlns="htt://ws.aache.org/ns/synase">
<AlicationNumber>
<listItems>
<text>258111A09</text>
<value>258111A09</value>
</listItems>
</AlicationNumber>
</fields>
<fields
xmlns="htt://ws.aache.org/ns/synase">
<AlicationNumber>
<listItems>
<text>429398A11</text>
<value>429398A11</value>
</listItems>
</AlicationNumber>
</fields>
<fields
xmlns="htt://ws.aache.org/ns/synase">
<AlicationNumber>
<listItems>
<text>450962A11</text>
<value>450962A11</value>
</listItems>
</AlicationNumber>
</fields>
<fields
xmlns="htt://ws.aache.org/ns/synase">
<AlicationNumber>
<listItems>
<text>588602A12</text>
<value>588602A12</value>
</listItems>
</AlicationNumber>
</fields>
<fields
xmlns="htt://ws.aache.org/ns/synase">
<AlicationNumber>
<listItems>
<text>618329A12</text>
<value>618329A12</value>
</listItems>
</AlicationNumber>
</fields>
<fields
xmlns="htt://ws.aache.org/ns/synase">
<AlicationNumber>
<listItems>
<text>624645A12</text>
<value>624645A12</value>
</listItems>
</AlicationNumber>
</fields>
<fields
xmlns="htt://ws.aache.org/ns/synase">
<AlicationNumber>
<listItems>
<text>643029A12</text>
<value>643029A12</value>
</listItems>
</AlicationNumber>
</fields>
<fields
xmlns="htt://ws.aache.org/ns/synase">
<AlicationNumber>
<listItems>
<text>655593A12</text>
<value>655593A12</value>
</listItems>
</AlicationNumber>
</fields>
<fields
xmlns="htt://ws.aache.org/ns/synase">
<AlicationNumber>
<listItems>
<text>684292A12</text>
<value>684292A12</value>
</listItems>
</AlicationNumber>
</fields>
<fields
xmlns="htt://ws.aache.org/ns/synase">
<AlicationNumber>
<listItems>
<text>903240A14</text>
<value>903240A14</value>
</listItems>
</AlicationNumber>
</fields>
<fields
xmlns="htt://ws.aache.org/ns/synase">
<AlicationNumber>
<listItems>
<text>1031807A15</text>
<value>1031807A15</value>
</listItems>
</AlicationNumber>
</fields>
<fields
xmlns="htt://ws.aache.org/ns/synase">
<AlicationNumber>
<listItems>
<text>1353624A17</text>
<value>1353624A17</value>
</listItems>
</AlicationNumber>
</fields>
<fields
xmlns="htt://ws.aache.org/ns/synase">
<AlicationNumber>
<listItems>
<text>1353626A17</text>
<value>1353626A17</value>
</listItems>
</AlicationNumber>
</fields>
<fields
xmlns="htt://ws.aache.org/ns/synase">
<AlicationNumber>
<listItems>
<text>1436375A18</text>
<value>1436375A18</value>
</listItems>
</AlicationNumber>
</fields>
<fields
xmlns="htt://ws.aache.org/ns/synase">
<AlicationNumber>
<listItems>
<text>1455356A18</text>
<value>1455356A18</value>
</listItems>
</AlicationNumber>
</fields>
<fields
xmlns="htt://ws.aache.org/ns/synase">
<AlicationNumber>
<listItems>
<text>1500185A18</text>
<value>1500185A18</value>
</listItems>
</AlicationNumber>
</fields>
<fields
xmlns="htt://ws.aache.org/ns/synase">
<AlicationNumber>
<listItems>
<text>1511985A18</text>
<value>1511985A18</value>
</listItems>
</AlicationNumber>
</fields>
<fields
xmlns="htt://ws.aache.org/ns/synase">
<AlicationNumber>
<listItems>
<text>1625059A19</text>
<value>1625059A19</value>
</listItems>
</AlicationNumber>
</fields>
<fields
xmlns="htt://ws.aache.org/ns/synase">
<AlicationNumber>
<listItems>
<text>1630914A19</text>
<value>1630914A19</value>
</listItems>
</AlicationNumber>
</fields>
<fields
xmlns="htt://ws.aache.org/ns/synase">
<AlicationNumber>
<listItems>
<text>1741745A20</text>
<value>1741745A20</value>
</listItems>
</AlicationNumber>
</fields>
<fields
xmlns="htt://ws.aache.org/ns/synase">
<AlicationNumber>
<listItems>
<text>1878082A21</text>
<value>1878082A21</value>
</listItems>
</AlicationNumber>
</fields>
<fields
xmlns="htt://ws.aache.org/ns/synase">
<AlicationNumber>
<listItems>
<text>2061825A22</text>
<value>2061825A22</value>
</listItems>
</AlicationNumber>
</fields>
<fields
xmlns="htt://ws.aache.org/ns/synase">
<AlicationNumber>
<listItems>
<text>2061829A22</text>
<value>2061829A22</value>
</listItems>
</AlicationNumber>
</fields>
<fields
xmlns="htt://ws.aache.org/ns/synase">
<AlicationNumber>
<listItems>
<text>2061830A22</text>
<value>2061830A22</value>
</listItems>
</AlicationNumber>
</fields>
<fields
xmlns="htt://ws.aache.org/ns/synase">
<AlicationNumber>
<listItems>
<text>5067/1993</text>
<value>5067/1993</value>
</listItems>
</AlicationNumber>
</fields>
<fields
xmlns="htt://ws.aache.org/ns/synase">
<AlicationNumber>
<listItems>
<text>497/1998</text>
<value>497/1998</value>
</listItems>
</AlicationNumber>
</fields>
<fields
xmlns="htt://ws.aache.org/ns/synase">
<AlicationNumber>
<listItems>
<text>954/1998</text>
<value>954/1998</value>
</listItems>
</AlicationNumber>
</fields>
<fields
xmlns="htt://ws.aache.org/ns/synase">
<AlicationNumber>
<listItems>
<text>206A09</text>
<value>206A09</value>
</listItems>
</AlicationNumber>
</fields>
<fields
xmlns="htt://ws.aache.org/ns/synase">
<AlicationNumber>
<listItems>
<text>34A09</text>
<value>34A09</value>
</listItems>
</AlicationNumber>
</fields>
<fields
xmlns="htt://ws.aache.org/ns/synase">
<AlicationNumber>
<listItems>
<text>15187A09</text>
<value>15187A09</value>
</listItems>
</AlicationNumber>
</fields>
<fields
xmlns="htt://ws.aache.org/ns/synase">
<AlicationNumber>
<listItems>
<text>15188A09</text>
<value>15188A09</value>
</listItems>
</AlicationNumber>
</fields>
<fields
xmlns="htt://ws.aache.org/ns/synase">
<AlicationNumber>
<listItems>
<text>18122A04</text>
<value>18122A04</value>
</listItems>
</AlicationNumber>
</fields>
</message>
I'm using messageType "application/JSON" and I get the result below:
{
"message": {
"status": "Success",
"timestam": "2022-12-04T17:51:15.9841813+11:00",
"resultCount": 35,
"totalCount": 35,
"fields": [
{
"AlicationNumber": {
"listItems": {
"text": "97282A08",
"value": "97282A08"
}
}
},
{
"AlicationNumber": {
"listItems": {
"text": "185804A09",
"value": "185804A09"
}
}
},
{
"AlicationNumber": {
"listItems": {
"text": "241248A09",
"value": "241248A09"
}
}
},
{
"AlicationNumber": {
"listItems": {
"text": "258111A09",
"value": "258111A09"
}
}
},
{
"AlicationNumber": {
"listItems": {
"text": "429398A11",
"value": "429398A11"
}
}
},
{
"AlicationNumber": {
"listItems": {
"text": "450962A11",
"value": "450962A11"
}
}
},
{
"AlicationNumber": {
"listItems": {
"text": "588602A12",
"value": "588602A12"
}
}
},
{
"AlicationNumber": {
"listItems": {
"text": "618329A12",
"value": "618329A12"
}
}
},
{
"AlicationNumber": {
"listItems": {
"text": "624645A12",
"value": "624645A12"
}
}
},
{
"AlicationNumber": {
"listItems": {
"text": "643029A12",
"value": "643029A12"
}
}
},
{
"AlicationNumber": {
"listItems": {
"text": "655593A12",
"value": "655593A12"
}
}
},
{
"AlicationNumber": {
"listItems": {
"text": "684292A12",
"value": "684292A12"
}
}
},
{
"AlicationNumber": {
"listItems": {
"text": "903240A14",
"value": "903240A14"
}
}
},
{
"AlicationNumber": {
"listItems": {
"text": "1031807A15",
"value": "1031807A15"
}
}
},
{
"AlicationNumber": {
"listItems": {
"text": "1353624A17",
"value": "1353624A17"
}
}
},
{
"AlicationNumber": {
"listItems": {
"text": "1353626A17",
"value": "1353626A17"
}
}
},
{
"AlicationNumber": {
"listItems": {
"text": "1436375A18",
"value": "1436375A18"
}
}
},
{
"AlicationNumber": {
"listItems": {
"text": "1455356A18",
"value": "1455356A18"
}
}
},
{
"AlicationNumber": {
"listItems": {
"text": "1500185A18",
"value": "1500185A18"
}
}
},
{
"AlicationNumber": {
"listItems": {
"text": "1511985A18",
"value": "1511985A18"
}
}
},
{
"AlicationNumber": {
"listItems": {
"text": "1625059A19",
"value": "1625059A19"
}
}
},
{
"AlicationNumber": {
"listItems": {
"text": "1630914A19",
"value": "1630914A19"
}
}
},
{
"AlicationNumber": {
"listItems": {
"text": "1741745A20",
"value": "1741745A20"
}
}
},
{
"AlicationNumber": {
"listItems": {
"text": "1878082A21",
"value": "1878082A21"
}
}
},
{
"AlicationNumber": {
"listItems": {
"text": "2061825A22",
"value": "2061825A22"
}
}
},
{
"AlicationNumber": {
"listItems": {
"text": "2061829A22",
"value": "2061829A22"
}
}
},
{
"AlicationNumber": {
"listItems": {
"text": "2061830A22",
"value": "2061830A22"
}
}
},
{
"AlicationNumber": {
"listItems": {
"text": "5067/1993",
"value": "5067/1993"
}
}
},
{
"AlicationNumber": {
"listItems": {
"text": "497/1998",
"value": "497/1998"
}
}
},
{
"AlicationNumber": {
"listItems": {
"text": "954/1998",
"value": "954/1998"
}
}
},
{
"AlicationNumber": {
"listItems": {
"text": "206A09",
"value": "206A09"
}
}
},
{
"AlicationNumber": {
"listItems": {
"text": "34A09",
"value": "34A09"
}
}
},
{
"AlicationNumber": {
"listItems": {
"text": "15187A09",
"value": "15187A09"
}
}
},
{
"AlicationNumber": {
"listItems": {
"text": "15188A09",
"value": "15188A09"
}
}
},
{
"AlicationNumber": {
"listItems": {
"text": "18122A04",
"value": "18122A04"
}
}
}
]
}
}
But the format that I need to get is as below:
{
"message": {
"status": "Success",
"timestam": "2022-12-04T17:51:15.9841813+11:00",
"resultCount": 35,
"totalCount": 35,
"fields": [
{
"AlicationNumber": {
"listItems": [
{
"text": "97282A08",
"value": "97282A08"
},
{
"text": "185804A09",
"value": "185804A09"
},
{
"text": "241248A09",
"value": "241248A09"
},
{"text": "258111A09",
"value": "258111A09"
},
...
]
}
}
]
}}
I have used JSON transformer as well but didn't change the response format.
I used the below configuration for the JSON transformer
{
"$schema": "http://json-schema.org/draft-04/schema#",
"type": "object",
"properties": {
"fields": {
"type": "object",
"properties": {
"ApplicationNumber": {
"type": "object"
},
"listItems":{
"type": "Array",
"properties":{
"text":{
"type":"string"
},
"value":{
"type":"string"
}
}
}
}
}
}
}
Can you please help me with how can I achieve this?
I'm using wso2 integration studio 7.2.
| [
"You can use the PayloadFactory Mediator for this. Take a look at the example below.\n<payloadFactory media-type=\"json\">\n <format>{\n \"message\": {\n \"status\": \"$1\",\n \"timestam\": \"$2\",\n \"resultCount\": $3,\n \"totalCount\": $4,\n \"fields\": [\n {\n \"AlicationNumber\": $5 \n }\n ]\n }}\n </format>\n <args>\n <arg evaluator=\"xml\" expression=\"//message/status\" />\n <arg evaluator=\"xml\" expression=\"//message/timestam\" />\n <arg evaluator=\"xml\" expression=\"//message/resultCount\" />\n <arg evaluator=\"xml\" expression=\"//message/totalCount\" />\n <arg evaluator=\"xml\" expression=\"//sy:AlicationNumber/sy:listItems\" xmlns:sy=\"htt://ws.aache.org/ns/synase\"/>\n </args>\n</payloadFactory>\n\nComplete API\n<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<api context=\"/jsontest\" name=\"HelloWorld\" xmlns=\"http://ws.apache.org/ns/synapse\">\n <resource methods=\"POST\">\n <inSequence>\n <payloadFactory media-type=\"json\">\n <format>{\n \"message\": {\n \"status\": \"$1\",\n \"timestam\": \"$2\",\n \"resultCount\": $3,\n \"totalCount\": $4,\n \"fields\": [\n {\n \"AlicationNumber\": $5 \n }\n ]\n }}\n </format>\n <args>\n <arg evaluator=\"xml\" expression=\"//message/status\"/>\n <arg evaluator=\"xml\" expression=\"//message/timestam\"/>\n <arg evaluator=\"xml\" expression=\"//message/resultCount\"/>\n <arg evaluator=\"xml\" expression=\"//message/totalCount\"/>\n <arg evaluator=\"xml\" expression=\"//sy:AlicationNumber/sy:listItems\" xmlns:sy=\"htt://ws.aache.org/ns/synase\"/>\n </args>\n </payloadFactory>\n <log category=\"DEBUG\" level=\"full\"/>\n <respond/>\n </inSequence>\n <outSequence/>\n <faultSequence/>\n </resource>\n</api>\n\nReuest\ncurl --location --request POST 'http://localhost:8290/jsontest' \\\n--header 'Content-Type: application/xml' \\\n--data-raw '<jsonObject>\n <message>\n <status>Success</status>\n <timestam>2022-12-04T17:51:15.9841813+11:00</timestam>\n <resultCount>35</resultCount>\n <totalCount>35</totalCount>\n <fields\n xmlns=\"htt://ws.aache.org/ns/synase\">\n <AlicationNumber>\n <listItems>\n <text>97282A08</text>\n <value>97282A08</value>\n </listItems>\n </AlicationNumber>\n </fields>\n <fields\n xmlns=\"htt://ws.aache.org/ns/synase\">\n <AlicationNumber>\n <listItems>\n <text>185804A09</text>\n <value>185804A09</value>\n </listItems>\n </AlicationNumber>\n </fields>\n <fields\n xmlns=\"htt://ws.aache.org/ns/synase\">\n <AlicationNumber>\n <listItems>\n <text>241248A09</text>\n <value>241248A09</value>\n </listItems>\n </AlicationNumber>\n </fields>\n </message>\n</jsonObject>'\n\nResponse\n{\n \"message\": {\n \"status\": \"Success\",\n \"timestam\": \"2022-12-04T17:51:15.9841813+11:00\",\n \"resultCount\": 35,\n \"totalCount\": 35,\n \"fields\": [\n {\n \"AlicationNumber\": {\n \"listItems\": [\n {\n \"text\": \"97282A08\",\n \"value\": \"97282A08\"\n },\n {\n \"text\": \"185804A09\",\n \"value\": \"185804A09\"\n },\n {\n \"text\": \"241248A09\",\n \"value\": \"241248A09\"\n }\n ]\n }\n }\n ]\n }\n}\n\n"
] | [
0
] | [] | [] | [
"json",
"wso2",
"wso2_enterprise_integrator",
"wso2_esb",
"wso2_integration_studio"
] | stackoverflow_0074673524_json_wso2_wso2_enterprise_integrator_wso2_esb_wso2_integration_studio.txt |
Q:
arithmetic operations on a columns of a data set, using the index of a row as a variable in pandas
So I'm not using Python on a day-to-day basis, so this is kind of new to me, but I have a large amount of CSV files to edit and I imagine a simple script can save me a lot of time.
suppose I have a table
input data
I want to create a new table that performs the following operation on each set y[i] (a column in the table)
z[i] = (y[i]-y[0])/(y[5]-y[0]) - i
So far I have had some issues including the index i (the row index) in the arithmetic operation
What I've managed so far:
import pandas as pd
#import data file
csv_in = pd.read_csv('data.csv')
#creating the denominator
lsb = csv_in.iloc[5] - csv_in.iloc[0]
#here i'm missing -i in the end
inl = (csv_in.iloc[0] + csv_in)/lsb - csv_in.index.to_series()
print(inl)
So I'm wondering if there is a way to do it with a one-liner like this? The
csv_temp.index.to_series()
didn't work; I assume I'm messing with the dimensions of the arrays I'm trying to operate on. Do I have to do some kind of a loop?
the result should be
output data
Thanks!
A:
So for now I'm doing it with a loop.
I don't think it is the most efficient way, though.
But calling each column in a loop and performing the operation on a Series of the indexes does the trick:
for i in range(csv_in.shape[1]):
csv_in[csv_in.columns[i]] = csv_in[csv_in.columns[i]] - csv_in.index.to_series()
Still, I would like to know if there is a more clever way to do it.
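One loop-free alternative — a minimal sketch assuming the columns are numeric and that the full formula from the question, z[i] = (y[i]-y[0])/(y[5]-y[0]) - i, is wanted for every column (the file name data.csv is taken from the question):
import pandas as pd

csv_in = pd.read_csv('data.csv')

# Subtracting a Series from a DataFrame broadcasts over rows (aligned on columns),
# while .sub(..., axis=0) broadcasts the row index down each column.
lsb = csv_in.iloc[5] - csv_in.iloc[0]
z = ((csv_in - csv_in.iloc[0]) / lsb).sub(csv_in.index.to_series(), axis=0)
print(z)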
| arithmetic operations on a columns of a data set, using the index of a row as a variable in pandas | So i'm not using python on my day to day basis so this is kind of new to me, but I have large amount of csv files to edit and I imagine a simple script can save me a lot of time.
suppose I have a table
input data
I want to create a new table that perform the following operation on each set y[i] (a column in the table)
z[i] = (y[i]-y[0])/(y[5]-y[0]) - i
So far I have some issue including the index i (the row index) in the arithmetic operation
What I've managed so far:
`
import pandas as pd
#import data file
csv_in = pd.read_csv('data.csv')
#creating the denominator
lsb = csv_in.iloc[5] - csv_in.iloc[0]
#here i'm missing -i in the end
inl = (csv_in.iloc[0] + csv_in)/lsb - csv_in.index.to_series()
print(inl)
So i'm wondering if there is a way to do it with a one liner like this? the
csv_temp.index.to_series()
didn't work, I assume i'm messing with the dimensions of the arrays i'm trying to operate on. do I have to do some kind of a loop?
the result should be
output data
Thanks!
| [
"so for now i'm doing it with a loop\ni don't think it is the most efficient way though.\nbut calling each column in a loop and performing the operation on a series of the indexes does the trick\nfor i in range(csv_in.shape[1]):\n csv_in[csv_in.columns[i]] = csv_in[csv_in.columns[i]] - csv_in.index.to_series()\n\nstill would like to know if there is a more clever way to do it\n"
] | [
0
] | [] | [] | [
"arithmetic_expressions",
"dataframe",
"pandas",
"python_3.x"
] | stackoverflow_0074646967_arithmetic_expressions_dataframe_pandas_python_3.x.txt |
Q:
How to convert a slice of maps to a slice of structs with different properties
I am working with an API and I need to pass it a slice of structs.
I have a slice of maps so I need to convert it to a slice of structs.
package main
import "fmt"
func main() {
a := []map[string]interface{}{}
b := make(map[string]interface{})
c := make(map[string]interface{})
b["Prop1"] = "Foo"
b["Prop2"] = "Bar"
a = append(a, b)
c["Prop3"] = "Baz"
c["Prop4"] = "Foobar"
a = append(a, c)
fmt.Println(a)
}
[map[Prop1:Foo Prop2:Bar] map[Prop3:Baz Prop4:Foobar]]
So in this example, I have the slice of maps a, which contains b and c, which are maps of strings with different keys.
I'm looking to convert a to a slice of structs where the first element is a struct with Prop1 and Prop2 as properties, and where the second element is a struct with Prop3 and Prop4 as properties.
Is this possible?
I've looked at https://github.com/mitchellh/mapstructure but I wasn't able to get it working for my use case. I've looked at this answer:
https://stackoverflow.com/a/26746461/3390419
which explains how to use the library:
mapstructure.Decode(myData, &result)
however this seems to assume that the struct of which result is an instance is predefined, whereas in my case the structure is dynamic.
A:
What you can do is first loop over each map individually; using the key-value pairs of each map, you construct a corresponding slice of reflect.StructField values. Once you have such a slice ready you can pass it to reflect.StructOf, which will return a reflect.Type value that represents the dynamic struct type. You can then pass that to reflect.New to create a reflect.Value which will represent an instance of the dynamic struct (actually a pointer to the struct).
E.g.
var result []any
for _, m := range a {
fields := make([]reflect.StructField, 0, len(m))
for k, v := range m {
f := reflect.StructField{
Name: k,
Type: reflect.TypeOf(v), // allow for other types, not just strings
}
fields = append(fields, f)
}
st := reflect.StructOf(fields) // new struct type
sv := reflect.New(st) // new struct value
for k, v := range m {
sv.Elem(). // dereference struct pointer
FieldByName(k). // get the relevant field
Set(reflect.ValueOf(v)) // set the value of the field
}
result = append(result, sv.Interface())
}
https://go.dev/play/p/NzHQzKwhwLH
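A small, hypothetical usage follow-up (it assumes the snippet above plus an fmt import): every element of result is a pointer to its own dynamically created struct type, which printing the type alongside the value makes visible.
// Each element of result is a pointer to an anonymous struct type built at
// runtime, so %T shows a different *struct{...} type per map shape.
for _, v := range result {
    fmt.Printf("%T => %+v\n", v, v)
}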
| How to convert a slice of maps to a slice of structs with different properties | I am working with an api and I need to pass it a slice of structs.
I have a slice of maps so I need to convert it to a slice of structs.
package main
import "fmt"
func main() {
a := []map[string]interface{}{}
b := make(map[string]interface{})
c := make(map[string]interface{})
b["Prop1"] = "Foo"
b["Prop2"] = "Bar"
a = append(a, b)
c["Prop3"] = "Baz"
c["Prop4"] = "Foobar"
a = append(a, c)
fmt.Println(a)
}
[map[Prop1:Foo Prop2:Bar] map[Prop3:Baz Prop4:Foobar]]
so in this example, I have the slice of maps a, which contains b and c which are maps of strings with different keys.
I'm looking to convert a to a slice of structs where the first element is a struct with Prop1 and Prop2 as properties, and where the second element is a struct with Prop3 and Prop4 as properties.
Is this possible?
I've looked at https://github.com/mitchellh/mapstructure but I wasn't able to get it working for my use case. I've looked at this answer:
https://stackoverflow.com/a/26746461/3390419
which explains how to use the library:
mapstructure.Decode(myData, &result)
however this seems to assume that the struct of which result is an instance is predefined, whereas in my case the structure is dynamic.
| [
"What you can do is to first loop over each map individually, using the key-value pairs of each map you construct a corresponding slice of reflect.StructField values. Once you have such a slice ready you can pass it to reflect.StructOf, that will return a reflect.Type value that represents the dynamic struct type, you can then pass that to reflect.New to create a reflect.Value which will represent an instance of the dynamic struct (actually pointer to the struct).\nE.g.\nvar result []any\nfor _, m := range a {\n fields := make([]reflect.StructField, 0, len(m))\n\n for k, v := range m {\n f := reflect.StructField{\n Name: k,\n Type: reflect.TypeOf(v), // allow for other types, not just strings\n }\n fields = append(fields, f)\n }\n\n st := reflect.StructOf(fields) // new struct type\n sv := reflect.New(st) // new struct value\n\n for k, v := range m {\n sv.Elem(). // dereference struct pointer\n FieldByName(k). // get the relevant field\n Set(reflect.ValueOf(v)) // set the value of the field\n }\n\n result = append(result, sv.Interface())\n}\n\nhttps://go.dev/play/p/NzHQzKwhwLH\n"
] | [
1
] | [] | [] | [
"go"
] | stackoverflow_0074670325_go.txt |
Q:
Chrome out of memory
I just updated my Windows 11. I can't open Chrome because it runs out of memory; I watched many tutorials and now Chrome still can't open. How do I fix it?
I have tried so many tutorials and it has just gotten worse.
A:
After the Chrome update, it gives an out of memory error to the pages.
| Chrome out of memory | I just updated my windows 11. I can't open chrome because out of memory, i watch many tutorial and now my chrome can't open. How to fix it?
I have try so many tutorial and it just became more sucks
| [
"After the Chrome update, it gives an out of memory error to the pages.\n"
] | [
0
] | [] | [] | [
"freeze",
"lag",
"memory",
"settings",
"windows"
] | stackoverflow_0074580586_freeze_lag_memory_settings_windows.txt |
Q:
SQLite using java, UPDATE statement
try {
connection = openConnection();
String sqlText = "UPDATE Student SET firstname = ?, lastname = ?, streetaddress = ?, postcode = ?, postoffice = ? WHERE id = ?";
preparedStatement = connection.prepareStatement(sqlText);
preparedStatement.setString(1, student.getFirstname());
preparedStatement.setString(2, student.getLastname());
preparedStatement.setString(3, student.getStreetaddress());
preparedStatement.setString(4, student.getPostcode());
preparedStatement.setString(5, student.getPostoffice());
preparedStatement.setInt(6, student.getId());
preparedStatement.executeUpdate();
errorCode = 0;
} catch (SQLException sqle) {
if (sqle.getErrorCode() == ConnectionParameters.PK_VIOLATION_ERROR) {
errorCode = 1;
} else {
System.out.println("\n[ERROR] MovieDAO: insertMovie() failed. " + sqle.getMessage() + "\n");
errorCode = -1;
}
}
finally {
DbUtils.closeQuietly(preparedStatement, connection);
}
I need to make an update method for my database. The code above is what I used, but it does not fail even when the PK doesn't match. It either updates the Student info if the id matches or does nothing, but it does not give me the error.
A:
You should also add a check to your code to make sure that the UPDATE statement actually updated a row. You can do this by calling the executeUpdate() method on your PreparedStatement object and checking the return value. This method returns the number of rows that were updated by the UPDATE statement. If the return value is 0, that means no rows were updated, which could indicate that the primary key value you provided did not match any rows in the table.
Here is an example of how you can update your code to check for this:
try {
connection = openConnection();
String sqlText = "UPDATE Student SET firstname = ?, lastname = ?, streetaddress = ?, postcode = ?, postoffice = ? WHERE id = ?";
preparedStatement = connection.prepareStatement(sqlText);
preparedStatement.setString(1, student.getFirstname());
preparedStatement.setString(2, student.getLastname());
preparedStatement.setString(3, student.getStreetaddress());
preparedStatement.setString(4, student.getPostcode());
preparedStatement.setString(5, student.getPostoffice());
preparedStatement.setInt(6, student.getId());
// Check the number of rows that were updated by the UPDATE statement
int rowsUpdated = preparedStatement.executeUpdate();
if (rowsUpdated == 0) {
// No rows were updated, which means the provided primary key value did not match any rows in the table
errorCode = 1;
} else {
// The UPDATE statement was successful
errorCode = 0;
}
} catch (SQLException sqle) {
System.out.println("\n[ERROR] MovieDAO: insertMovie() failed. " + sqle.getMessage() + "\n");
errorCode = -1;
} finally {
DbUtils.closeQuietly(preparedStatement, connection);
}
Hope this helps!
| SQLite using java, UPDATE statement | try {
connection = openConnection();
String sqlText = "UPDATE Student SET firstname = ?, lastname = ?, streetaddress = ?, postcode = ?, postoffice = ? WHERE id = ?";
preparedStatement = connection.prepareStatement(sqlText);
preparedStatement.setString(1, student.getFirstname());
preparedStatement.setString(2, student.getLastname());
preparedStatement.setString(3, student.getStreetaddress());
preparedStatement.setString(4, student.getPostcode());
preparedStatement.setString(5, student.getPostoffice());
preparedStatement.setInt(6, student.getId());
preparedStatement.executeUpdate();
errorCode = 0;
} catch (SQLException sqle) {
if (sqle.getErrorCode() == ConnectionParameters.PK_VIOLATION_ERROR) {
errorCode = 1;
} else {
System.out.println("\n[ERROR] MovieDAO: insertMovie() failed. " + sqle.getMessage() + "\n");
errorCode = -1;
}
}
finally {
DbUtils.closeQuietly(preparedStatement, connection);
}
I need to make update method in my database. The code above is what I used but it does not fail even when PK doesn't match. It either updates the Student info if the id matches or nothing but does not give me the error.
| [
"You should also add a check to your code to make sure that the UPDATE statement actually updated a row. You can do this by calling the executeUpdate() method on your PreparedStatement object and checking the return value. This method returns the number of rows that were updated by the UPDATE statement. If the return value is 0, that means no rows were updated, which could indicate that the primary key value you provided did not match any rows in the table.\nHere is an example of how you can update your code to check for this:\ntry {\n connection = openConnection();\n String sqlText = \"UPDATE Student SET firstname = ?, lastname = ?, streetaddress = ?, postcode = ?, postoffice = ? WHERE id = ?\";\n preparedStatement = connection.prepareStatement(sqlText);\n preparedStatement.setString(1, student.getFirstname());\n preparedStatement.setString(2, student.getLastname());\n preparedStatement.setString(3, student.getStreetaddress());\n preparedStatement.setString(4, student.getPostcode());\n preparedStatement.setString(5, student.getPostoffice());\n preparedStatement.setInt(6, student.getId());\n\n // Check the number of rows that were updated by the UPDATE statement\n int rowsUpdated = preparedStatement.executeUpdate();\n if (rowsUpdated == 0) {\n // No rows were updated, which means the provided primary key value did not match any rows in the table\n errorCode = 1;\n } else {\n // The UPDATE statement was successful\n errorCode = 0;\n }\n} catch (SQLException sqle) {\n System.out.println(\"\\n[ERROR] MovieDAO: insertMovie() failed. \" + sqle.getMessage() + \"\\n\");\n errorCode = -1;\n} finally {\n DbUtils.closeQuietly(preparedStatement, connection);\n}\n\n\nHope this helps!\n"
] | [
0
] | [] | [] | [
"java",
"sqlite"
] | stackoverflow_0074675230_java_sqlite.txt |
Q:
Optional - Unhandled exception using orElseThrow()
I want to throw a custom exception in case parsing fails, but the compiler complains that there is an unhandled ParseException that comes from the parse() method.
But what am I missing?
My code:
public void validateConstraints(RequestType body) {
SimpleDateFormat simpleDateFormatYearMonth = new SimpleDateFormat("yyyy-MM-dd");
Date date = Optional
.ofNullable(body.date())
.map(simpleDateFormatYearMonth::parse)
.orElseThrow(() -> new InvalidCustomException(""));
}
A:
TL;DR
Your apparent issue stems from the fact that java.util.function.Function, as well as many other functional interfaces, doesn't declare any checked exceptions to be thrown by its abstract method. For that reason, implementations can't violate the contract by providing behavior which is less safe than declared.
Here's a quote from the Java Language Specification §8.4.8.3. Requirements in Overriding and Hiding:
For every checked exception type listed in the throws clause of m2, that same exception class or one of its supertypes must occur in the erasure (§4.6) of the throws clause of m1; otherwise, a compile-time error occurs.
With that being said, you can't propagate a checked exception outside the lambda or method reference (since it's not declared, it should be handled right on the spot).
Creating an Optional only for the purpose of hiding null-check, and chaining methods on it is a misuse of Optional since it goes against its design goal.
java.util.Date (and java.sql.Date) as well as SimpleDateFormat are obsolete since Java 8 (reminder: this version was released more than 10 years ago). As a replacement we have a new Time API, represented by classes that reside in java.time package, like Instant, LocalDateTime, DateTimeFormatter, etc.
Avoid using Optional to replace Null-checks
The design goal of the Optional is to serve as a return type, and its method ofNullable() is supposed to wrap a nullable return value, not to perform validation.
You might be interested in reading:
Should Optional.ofNullable() be used for null check?
Valid usage of Optional type in Java 8
Here's a short quote from the linked above answer by @StuartMarks, Java and OpenJDK developer:
A typical code smell is, instead of the code using method chaining to handle an Optional returned from some method, it creates an Optional from something that's nullable, in order to chain methods and avoid conditionals.
To validate that a value is not null, the JDK offers the overloaded method Objects.requireNonNull(), which was specifically designed for that purpose. But in this case it's not applicable because you need to throw your custom exception (requireNonNull() operates via NPE; you can only provide a custom message).
The last thing worth pointing out before diving into the solution is that there's nothing wrong with implicit null-checks (if you have quite a lot of them, that's an issue rooted in the way your classes and behavior are designed, rather than related to the tools offered by the language).
Therefore, I would advise implementing this functionality using plain conditional logic:
public static final SimpleDateFormat YEAR_MONTH_DAY = new SimpleDateFormat("yyyy-MM-dd");
public void validateConstraints(RequestType body) {
if (tryParse(body.date()) == null) throw new InvalidCustomException("message");
}
private Date tryParse(String str) {
Date date = null;
try {
if (str != null) date = YEAR_MONTH_DAY.parse(str);
} catch (ParseException e) {
e.printStackTrace();
}
return date;
}
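Since the answer recommends the java.time API as the modern replacement, here is a minimal sketch of the same validation using LocalDate and DateTimeFormatter (RequestType and InvalidCustomException are the types from the question; the class name RequestValidator is made up):
import java.time.LocalDate;
import java.time.format.DateTimeFormatter;
import java.time.format.DateTimeParseException;

public class RequestValidator {

    // DateTimeFormatter is immutable and thread-safe, unlike SimpleDateFormat
    private static final DateTimeFormatter YEAR_MONTH_DAY = DateTimeFormatter.ofPattern("yyyy-MM-dd");

    public void validateConstraints(RequestType body) {
        String raw = body.date();
        if (raw == null || !isParsable(raw)) {
            throw new InvalidCustomException("message");
        }
    }

    private boolean isParsable(String str) {
        try {
            // DateTimeParseException is unchecked, so no throws clause is needed
            LocalDate.parse(str, YEAR_MONTH_DAY);
            return true;
        } catch (DateTimeParseException e) {
            return false;
        }
    }
}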
| Optional - Unhandled exception using orElseThrow() | I want to throw a custom exception in case parsing fails, but the compiler complains that there is an unhandled ParserException that comes from the parse() method.
But What am I missing?
My code:
public void validateConstraints(RequestType body) {
SimpleDateFormat simpleDateFormatYearMonth = new SimpleDateFormat("yyyy-MM-dd");
Date date = Optional
.ofNullable(body.date())
.map(simpleDateFormatYearMonth::parse)
.orElseThrow(() -> new InvalidCustomException(""));
}
| [
"TL;DR\n\nYour apparent issue stems from the fact java.util.Function as well as many other functional interfaces doesn't declare to throw any checked Exceptions in its abstract method. For that reason, implementations can't violate the contract providing behavior which is less safe than declared.\n\nHere's a quote from the Java Language Specification §8.4.8.3. Requirements in Overriding and Hiding:\n\nFor every checked exception type listed in the throws clause of m2, that same exception class or one of its supertypes must occur in the erasure (§4.6) of the throws clause of m1; otherwise, a compile-time error occurs.\n\nWith that being said, you can't propagate a checked exception outside the lambda or method reference (since it's not declared, it should be handled right on the spot).\n\nCreating an Optional only for the purpose of hiding null-check, and chaining methods on it is a misuse of Optional since it goes against its design goal.\n\njava.util.Date (and java.sql.Date) as well as SimpleDateFormat are obsolete since Java 8 (reminder: this version was released more than 10 years ago). As a replacement we have a new Time API, represented by classes that reside in java.time package, like Instant, LocalDateTime, DateTimeFormatter, etc.\n\n\nAvoid using Optional to replace Null-checks\nThe design goal of the Optional is to serve as a return type, and its method ofNullable() is supposed to wrap a nullable return value, not to perform validation.\nYou might be interested in reading:\n\nShould Optional.ofNullable() be used for null check?\n\nValid usage of Optional type in Java 8\n\n\nHere's a short quote from the linked above answer by @StuartMarks, Java and OpenJDK developer:\n\nA typical code smell is, instead of the code using method chaining to handle an Optional returned from some method, it creates an Optional from something that's nullable, in order to chain methods and avoid conditionals.\n\nTo validate if a value is not null JDK offers overloaded method Objects.requireNoneNull(), which was specifically designed for that purpose. But in this case it's not applicable because you need to throw your custom exception (requireNoneNull() operates via NPE, you can only provide a custom message).\nThe last thing worth to point out before diving into the solution, is that there's nothing wrong with implicit null-checks (if you have quite a bit of them that's an issue, which is rooted in a way your classes and behavior are designed, rather than related to the tools offered by the language).\nTherefore, I would advise to implement this functionality using a plain conditional logic:\npublic static final SimpleDateFormat YEAR_MONTH_DAY = new SimpleDateFormat(\"yyyy-MM-dd\");\n\npublic void validateConstraints(RequestType body) {\n if (tryParse(body.date()) == null) throw new InvalidCustomException(\"message\");\n}\n\nprivate Date tryParse(String str) {\n Date date = null;\n try {\n if (str != null) date = YEAR_MONTH_DAY.parse(str);\n } catch (ParseException e) {\n e.printStackTrace();\n }\n return date;\n}\n\n"
] | [
1
] | [] | [] | [
"exception",
"java",
"option_type"
] | stackoverflow_0074673944_exception_java_option_type.txt |
Q:
Android 13: move my foreground notification back up?
I have an app which has a foreground task, and posts an ongoing notification.
Up until Android 12, it was displayed at the topmost place in the notification drawer.
Android 13 changes this, making it appear further down:
As you can see, Messenger now precedes my application.
Can I somehow post the ongoing notification to appear at the top?
I'm using it a lot so would be much more comfortable if I can have it on top (where now the messenger is).
Notification is created with basic builder:
Notification.Builder b;
Notification notification = b.setTicker(ticker)
.setSmallIcon(smallicon)
.setContentTitle(title)
.setContentText(text)
.setContentIntent(contentIntent)
.setWhen(0)
.setAutoCancel(false)
.setOngoing(true)
.build();
Can I somehow force it to the first place?
A:
You can try to set the notification priority to high in your code, and also avoid letting the Messenger app display on top of other apps. I think it will help.
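If "priority" is the lever to try, here is a rough sketch — the channel id and name are placeholders, context/smallicon/title/text/contentIntent are the variables from the question, and on Android 13 the system still decides the final ordering in the shade:
// Sketch only: since Android 8 the importance comes from the channel, so
// create (once) a high-importance channel and post the ongoing notification on it.
NotificationChannel channel = new NotificationChannel(
        "foreground_channel_id", "Foreground task",
        NotificationManager.IMPORTANCE_HIGH);
NotificationManager manager = context.getSystemService(NotificationManager.class);
manager.createNotificationChannel(channel);

Notification notification = new Notification.Builder(context, "foreground_channel_id")
        .setSmallIcon(smallicon)
        .setContentTitle(title)
        .setContentText(text)
        .setContentIntent(contentIntent)
        .setOngoing(true)
        .build();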
| Android 13: move my foreground notification back up? | I have an app which has a foreground task, and posts an ongoing notification.
Earlier until version 12, it was displayed at the topmost place on the notification drawer.
Android 13 changes this, making it appear down below:
As you can see, messenger is preceding my application.
Can I somehow post the ongoing notification to appear at the top?
I'm using it a lot so would be much more comfortable if I can have it on top (where now the messenger is).
Notification is created with basic builder:
Notification.Builder b;
Notification notification = b.setTicker(ticker)
.setSmallIcon(smallicon)
.setContentTitle(title)
.setContentText(text)
.setContentIntent(contentIntent)
.setWhen(0)
.setAutoCancel(false)
.setOngoing(true)
.build();
Can I somehow force it to the first place?
| [
"You can try to set priority height in your code. And also avoid setting messenger app superposition on other apps. I think it will help.\n"
] | [
0
] | [] | [] | [
"android",
"notifications"
] | stackoverflow_0074676013_android_notifications.txt |
Q:
Understanding of degree calculation in quadrants
I found something in my search, which I don't understand. The goal is to read out the angle of a pointer in a pressure gauge. In my research, I found this example:
https://circuits-ninja.pl/reading-an-indication-from-an-analog-pressure-gauge-using-the-esp32-cam-module-with-an-ov2640-and-opencv-camera/
He's calculating the degree as follows:
# Finding angle using the arc tan of y/x
res = np.arctan(np.divide(float(y_angle), float(x_angle)))
#Converting to degrees
res = np.rad2deg(res)
if x_angle > 0 and y_angle > 0: #in quadrant I
final_angle = 270 - res
if x_angle < 0 and y_angle > 0: #in quadrant II
final_angle = 90 - res
if x_angle < 0 and y_angle < 0: #in quadrant III
final_angle = 90 - res
if x_angle > 0 and y_angle < 0: #in quadrant IV
final_angle = 270 - res
I understand the reason for using quadrants in this case, but what I don't understand is why he calculates 270 - res
if x_angle > 0 and y_angle > 0, and also calculates 270 - res if x_angle > 0 and y_angle < 0.
He's using the same formula for two different quadrants?
Thanks in advance
A:
I think this is because 0 degrees is located, if placed on a two-dimensional surface, on (0,1)=(cos(90),sin(90)) instead of (1,0)=(cos(0),sin(0)). This means it has an offset of 90 degrees.
A:
Much simpler way:
res = np.arctan2(float(y_angle), float(x_angle))
#Converting to degrees
res = np.rad2deg(res)
if (res < 0):
res += 360
That's all, arctan2 will account for all cases including zero x
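A small worked check of how arctan2 resolves the quadrants on its own (the sample points are arbitrary, one per quadrant):
import numpy as np

# np.arctan2(y, x) returns the signed angle, so no quadrant branching is needed;
# adding 360 to negative results maps everything onto 0..360 degrees.
for x, y in [(1, 1), (-1, 1), (-1, -1), (1, -1)]:
    deg = np.rad2deg(np.arctan2(float(y), float(x)))
    if deg < 0:
        deg += 360
    print(x, y, deg)   # 45.0, 135.0, 225.0, 315.0 in turn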
| Understanding of degree calculation in quadrants | I found something in my search, which I don't understand. The goal is to read out the angle of a pointer in a pressure gauge. In my research, I found this example:
https://circuits-ninja.pl/reading-an-indication-from-an-analog-pressure-gauge-using-the-esp32-cam-module-with-an-ov2640-and-opencv-camera/
He's calculating the degree as follows:
# Finding angle using the arc tan of y/x
res = np.arctan(np.divide(float(y_angle), float(x_angle)))
#Converting to degrees
res = np.rad2deg(res)
if x_angle > 0 and y_angle > 0: #in quadrant I
final_angle = 270 - res
if x_angle < 0 and y_angle > 0: #in quadrant II
final_angle = 90 - res
if x_angle < 0 and y_angle < 0: #in quadrant III
final_angle = 90 - res
if x_angle > 0 and y_angle < 0: #in quadrant IV
final_angle = 270 - res
I understand the reason of using quadrants in this case, but what i don't understand is why does he calculate 270 - res
if x_angle and y_angle > 0 and also calculate 270 - res if x_angle > 0 and y_angle < 0.
He's using the same formula for two different quadrants?
Thanks in forward
| [
"I think this is because 0 degrees is located, if placed on a two-dimensional surface, on (0,1)=(cos(90),sin(90)) instead of (1,0)=(cos(0),sin(0)). This means it has an offset of 90 degrees.\n",
"Much simpler way:\n res = np.arctan2(float(y_angle), float(x_angle))\n #Converting to degrees\n res = np.rad2deg(res)\n if (res < 0):\n res += 360 \n\nThat's all, arctan2 will account for all cases including zero x\n"
] | [
0,
0
] | [] | [] | [
"math",
"numpy",
"python"
] | stackoverflow_0074674308_math_numpy_python.txt |
Q:
Invalid prettier configuration file detected in VS Code
Booted up my VM running Xubuntu in VMware Workstation 17 Pro. Started working on an exercise in The Odin Project in VS Code; beforehand, I updated and upgraded via sudo apt-get update and upgrade. Started working and noticed my Prettier rules were not formatting on save.
The following error occurs:
["INFO" - 5:58:23 AM] Formatting completed in 6ms.
["INFO" - 5:58:30 AM] Formatting file:///home/t/repos/css-exercises/flex/03-flex-header-2/style.css
["ERROR" - 5:58:30 AM] Invalid prettier configuration file detected.
["ERROR" - 5:58:30 AM] No loader specified for extension ".prettierrc"
Error: No loader specified for extension ".prettierrc"
at Explorer.getLoaderEntryForFile (/home/t/.vscode/extensions/esbenp.prettier-vscode-9.10.3/node_modules/prettier/third-party.js:8194:17)
at Explorer.loadFileContent (/home/t/.vscode/extensions/esbenp.prettier-vscode-9.10.3/node_modules/prettier/third-party.js:8448:29)
at Explorer.createCosmiconfigResult (/home/t/.vscode/extensions/esbenp.prettier-vscode-9.10.3/node_modules/prettier/third-party.js:8453:40)
at runLoad (/home/t/.vscode/extensions/esbenp.prettier-vscode-9.10.3/node_modules/prettier/third-party.js:8464:37)
at async cacheWrapper (/home/t/.vscode/extensions/esbenp.prettier-vscode-9.10.3/node_modules/prettier/third-party.js:8294:22)
at async Promise.all (index 0)
at async t.ModuleResolver.getResolvedConfig (/home/t/.vscode/extensions/esbenp.prettier-vscode-9.10.3/dist/extension.js:1:5693)
at async t.default.format (/home/t/.vscode/extensions/esbenp.prettier-vscode-9.10.3/dist/extension.js:1:13308)
at async t.PrettierEditProvider.provideEdits (/home/t/.vscode/extensions/esbenp.prettier-vscode-9.10.3/dist/extension.js:1:11417)
at async B.provideDocumentFormattingEdits (/usr/share/code/resources/app/out/vs/workbench/api/node/extensionHostProcess.js:94:45902)
["ERROR" - 5:58:30 AM] Invalid prettier configuration file detected. See log for details.
Looked in user settings, saw the formatter was incorrect, and then switched it to Prettier code formatter. Still nothing would work. Uninstalled and reinstalled Prettier with no change. Tried disabling and re-enabling the extension. Tried turning on and off prettier: use editor config, prettier: resolve global modules, and prettier: require config. No change.
Currently the file is located in /home/t/repos/ and I also tried copy and pasting into the project directory and adding into the workspace of vs code. Side note, in the /repos folder is also the node_modules directory. The eslintrc.prettierrc and prettier.eslintrc files are correctly named and they remain intact.
What I did to try and work around this was to add a config path directly to the file in the repos directory via settings.JSON. Here is my current settings.JSON file:
{
"workbench.colorTheme": "Default Dark+",
"editor.guides.bracketPairs": true,
"workbench.iconTheme": "vscode-icons",
"editor.linkedEditing": true,
"security.workspace.trust.untrustedFiles": "open",
"prettier.configPath": "/home/t/repos/eslintrc.prettierrc",
"[javascript]": {
"editor.formatOnSave": true,
"editor.defaultFormatter": "esbenp.prettier-vscode"
},
"[typescript]": {
"editor.formatOnSave": true,
"editor.defaultFormatter": "esbenp.prettier-vscode"
},
"[css]": {
"editor.formatOnSave": true,
"editor.defaultFormatter": "esbenp.prettier-vscode"
},
"editor.defaultFormatter": "esbenp.prettier-vscode",
"gitlens.hovers.currentLine.over": "line",
"liveServer.settings.donotShowInfoMsg": true,
"liveServer.settings.AdvanceCustomBrowserCmdLine": "/opt/firefox/firefox",
"editor.formatOnSave": true,
"prettier.useEditorConfig": false
}
Where did I get these configs from originally?
Directly from this guide: https://vicvijayakumar.com/blog/eslint-airbnb-style-guide-prettier#4-install-the-airbnb-style-config-for-eslint-and-all-dependencies
Side note: The prettier: prettier path to the prettier module is currently blank. Inserting a path to the file did not work as I believe this is node module related?
Does anyone have any recommendations on how to fix this situation, please? I have tried every solution I have run across. I deeply appreciate any help I can get.
A:
TO START:
It's helpful to know which "settings.json" you're configuring. You need to make sure that both your workspace ".vscode/settings.json" file and your user "settings.json" file (its path is contingent on the O.S. you're running) are configured to work harmoniously, and that one is not overriding the other with the same configuration twice.
SECONDLY
Remove all configurations you added to your "./settings.json" file for prettier. Those settings were added by the extension author. Despite esbenp.prettier-vscode being the official prettier extension for VS Code, Prettier was never intended to be configured via VS Code's configuration files. Prettier is very nit-picky about its "./.prettierrc" configuration file. When you use the VS Code config ("settings.json"), the extension attempts to use a prettier config that it generates somewhere. If you end up with settings in some project workspace vscode configurations (e.g. ".vscode/settings.json" files), the extension will try to regenerate a file each time it loads a prettier setting. It may even try to load multiple, depending on the scope of your settings.json file. Somehow it has to handle the fact that the user-scoped settings.json file should always be overridden by a workspace "settings.json" configuration file. That's not to mention that prettier configs often contain their own overridden rule sets within the ".prettierrc" configuration file.
Note: Just FYI, the most problematic configuration you're using is the "prettier.configPath" setting.
I'm going to stop going down the rabbit hole; hopefully you get the point I am making, which is: don't use VS Code settings.json configuration files to configure "Prettier".
This will be easier to explain with a bullet list.
The following will help you configure a clean environment, one where Prettier will work as you have configured it to work.
To start...
...delete all Prettier settings that you added to all settings.json files. This includes any Prettier settings you added to project ".vscode/settings.json" files, and it especially includes all Prettier-settings that you added to your user "settings.json" file. After you finish, reload VS Code, by closing it out completely, and reopening it.
Rather than delete all prettier configuration files from any projects you have open, I am going to instead ask that when you reopen VS Code, you only open one instance of VS Code. If VS Code opens a project (aka project folder) after restarting, you're going to want to close that project without opening another one. To do that you can...
Use the keybinding ALT + K followed by the F key.
Alternatively you can use the title-bar menu like so: FILE >> CLOSE FOLDER
Additionally, make sure all tabs are closed as well.
At this point your instance of VS Code should be totally empty, completely a blank canvas. From here you are going to want to create a new file. To do this...
You have one of two options
(A) You can use the keybinding CTRL + ALT + SUPER + N
(B) Another way to achieve the same thing is to use the title-bar menu like so: FILE >> NEW FILE
Once you've prompted VS Code to create a new file, VS Code will want you to pick a location where it's to be created. The location doesn't matter, so long as it is in a completely empty folder, with nothing else in it. To name the file, VS Code will probably use the drop-down that is often referred to as the quick input menu. The file needs to be a JavaScript file; as a consequence, the file must end with the file extension ".js". So I can reference the file later, I will call mine "main.js", but you can call yours whatever you want, so long as you know which file I am referencing when you read "main.js".
In the same folder as "main.js", create one more new file without a file extension. This file MUST HAVE THE NAME...
.prettierrc
NOTE: "The file has a period (or dot) as the first character in its name (this makes it a hidden file)."
Add the following prettier configuration to the ".prettierrc" file you just created.
{
"trailingComma": "es5",
"tabWidth": 4,
"semi": true,
"singleQuote": true
}
Execute the following commands:
$ npm init
The command will ask a bunch of questions, just press enter for each one to quickly configure the environment with the default npm/Node.js configuration.
The purpose of this is simply to create a valid "package.json" file.
$ sudo npm i -g prettier && npm i -D prettier
// Or you can execute it as two commands, like this:
$ sudo npm i -g prettier
$ npm i -D prettier
The command (or commands, depending on how you enter them) install prettier as a project dependency, and as a global Node.js package.
NOTE: "Make sure that you have prettier installed as a vscode extension. And make sure that you have only one prettier extension. Having multiple can create problems and confusion. The one you should have should have the Extension ID: esbenp.prettier-vscode "
Prettier should work now. Use the main.js file we created earlier to write some JavaScript, then press F1 to open the quick input, type "format document" until you see the option "Format Document", and click it. Then choose prettier from the menu. Prettier won't format if you have erroneous code; it needs to be free from errors (if you want to fix errors, use a linter like ESLint).
You can add a bunch of blank lines, put braces on the wrong line, or leave out semicolons, and prettier should format all of those mistakes.
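If formatting still fails silently, the Prettier CLI can confirm which configuration file is actually being resolved. A small sketch, assuming prettier was installed as a project dependency as described above and that main.js sits in the current folder:
npx prettier --find-config-path main.js   # prints the path of the config file Prettier resolves
npx prettier --check main.js              # reports whether the file already matches Prettier's output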
| Invalid prettier configuration file detected in VS Code | Booted up my VM running xubuntu in vmware workstation 17 pro. Started working on an exercise in the Odin project in VS Code, beforehand, updated and upgraded via sudo apt-get update and upgrade. Started working and noticed my prettier rules were not formatting on save.
The following error occurs:
["INFO" - 5:58:23 AM] Formatting completed in 6ms.
["INFO" - 5:58:30 AM] Formatting file:///home/t/repos/css-exercises/flex/03-flex-header-2/style.css
["ERROR" - 5:58:30 AM] Invalid prettier configuration file detected.
["ERROR" - 5:58:30 AM] No loader specified for extension ".prettierrc"
Error: No loader specified for extension ".prettierrc"
at Explorer.getLoaderEntryForFile (/home/t/.vscode/extensions/esbenp.prettier-vscode-9.10.3/node_modules/prettier/third-party.js:8194:17)
at Explorer.loadFileContent (/home/t/.vscode/extensions/esbenp.prettier-vscode-9.10.3/node_modules/prettier/third-party.js:8448:29)
at Explorer.createCosmiconfigResult (/home/t/.vscode/extensions/esbenp.prettier-vscode-9.10.3/node_modules/prettier/third-party.js:8453:40)
at runLoad (/home/t/.vscode/extensions/esbenp.prettier-vscode-9.10.3/node_modules/prettier/third-party.js:8464:37)
at async cacheWrapper (/home/t/.vscode/extensions/esbenp.prettier-vscode-9.10.3/node_modules/prettier/third-party.js:8294:22)
at async Promise.all (index 0)
at async t.ModuleResolver.getResolvedConfig (/home/t/.vscode/extensions/esbenp.prettier-vscode-9.10.3/dist/extension.js:1:5693)
at async t.default.format (/home/t/.vscode/extensions/esbenp.prettier-vscode-9.10.3/dist/extension.js:1:13308)
at async t.PrettierEditProvider.provideEdits (/home/t/.vscode/extensions/esbenp.prettier-vscode-9.10.3/dist/extension.js:1:11417)
at async B.provideDocumentFormattingEdits (/usr/share/code/resources/app/out/vs/workbench/api/node/extensionHostProcess.js:94:45902)
["ERROR" - 5:58:30 AM] Invalid prettier configuration file detected. See log for details.
Looked in user settings and the formatter was incorrect and then I switched it to prettier code formatter. Still nothing would work. Uninstalled and reinstalled prettier with no change. Tried disabling and reenabling the extension. Tried turning on and off prettier: use editor config, prettier: resolve global modules, prettier: require config. No change.
Currently the file is located in /home/t/repos/ and I also tried copy and pasting into the project directory and adding into the workspace of vs code. Side note, in the /repos folder is also the node_modules directory. The eslintrc.prettierrc and prettier.eslintrc files are correctly named and they remain intact.
What I did to try and work around this was to add a config path directly to the file in the repos directory via settings.JSON. Here is my current settings.JSON file:
{
"workbench.colorTheme": "Default Dark+",
"editor.guides.bracketPairs": true,
"workbench.iconTheme": "vscode-icons",
"editor.linkedEditing": true,
"security.workspace.trust.untrustedFiles": "open",
"prettier.configPath": "/home/t/repos/eslintrc.prettierrc",
"[javascript]": {
"editor.formatOnSave": true,
"editor.defaultFormatter": "esbenp.prettier-vscode"
},
"[typescript]": {
"editor.formatOnSave": true,
"editor.defaultFormatter": "esbenp.prettier-vscode"
},
"[css]": {
"editor.formatOnSave": true,
"editor.defaultFormatter": "esbenp.prettier-vscode"
},
"editor.defaultFormatter": "esbenp.prettier-vscode",
"gitlens.hovers.currentLine.over": "line",
"liveServer.settings.donotShowInfoMsg": true,
"liveServer.settings.AdvanceCustomBrowserCmdLine": "/opt/firefox/firefox",
"editor.formatOnSave": true,
"prettier.useEditorConfig": false
}
Where did I get these configs from originally?
Directly from this guide: https://vicvijayakumar.com/blog/eslint-airbnb-style-guide-prettier#4-install-the-airbnb-style-config-for-eslint-and-all-dependencies
Side note: The prettier: prettier path to the prettier module is currently blank. Inserting a path to the file did not work as I believe this is node module related?
Does anyone have any recommendations on how to fix this situation, please? I have tried every solution I have run across. I deeply appreciate any help I can get.
| [
"TO START:\nIts helpful to know which \"settings.json\" your configuring. You need to make sure that both your workspace \".vscode/settings.json\" file, and your user \"settings.json\" file (path is contingent on the O.S. your running) are configured to work harmoniously, and that one is not overriding the other with the same configuration twice.\nSECONDLY\nRemove all configurations you added to your \"./settings.json\" file for prettier. Those settings were added by the extension author. Despite the esbenp.prettier-vscode being the official prettier extension for VS Code, Prettier was never intended to be configured via VS Code's configuration files. Prettier is very nit-picky about its \"./.prettierrc\" configuration file. When we use the VS Code config (\"settings.json\") when attempt to use a prettier config that the extension generates somewhere. If you end up with settings in some project workspace vscode configurations (e.g. \".vscode/settings.json\" files) the extension will try to regenerate a file each time one loads a prettier setting. It may even try to load multiple, depending on the scope of your settings.json file. Some how it has to handle that the user-scoped settings.json file should always be overriden by a workspace \"settings.json\" configuration file. That's not to mention that prettier configs often contain there own overridden rule sets within the \".prettierrc\" configuration file.\n\nNote: Just FYI, the most problematic configuration your using is the \"prettier.configPath\" setting.\n\n_I'm going to stop going down the rabbit hole, hopefully you get the point I am making, which is: Don't use VS Code settings.json configuration files to configure \"Prettier\".\n\nThis will be more easy to explain with a bullet-list\nThe following will help you configure a clean environment, one where Prettier will work as you have configure it to work.\nTo start...\n\n...delete all Prettier settings that you added to all settings.json files. This includes any Prettier settings you added to project \".vscode/settings.json\" files, and it especially includes all Prettier-settings that you added to your user \"settings.json\" file. After you finish, reload VS Code, by closing it out completely, and reopening it.\n\n\n\n\nRather than delete all prettier configuration files from any projects you have open, I am going to instead ask that when you reopen VS Code, that you only open one instance of VS Code. If VS Code opens a project (aka project-folder) after restarting, you're going to want to close that project w/o opening another one. To do that you can...\n\nUse the keybinding ALT + K followed by the F key.\nAlternatively you can use the title-bar menu like so: FILE >> CLOSE FOLDER\n\n\n\nAdditionally, make sure all tabs are closed as well.\n\n\n\nAt this point your instance of VS Code should be totally empty, completely a blank canvas. From here you are going to want to create a new file. To do this...\n\nYou have one of two options\n\n(A) You can use the keybinding CTRL + ALT + SUPER + N\n(B) Another way to achieve the same thing is to use the title-bar menu like so: FILE >> NEW FILE\n\n\nOnce you've prompted VS Code to create a new file VS Code will want you to pick a location where it's to be created at. The location doesn't matter, so long as it is in a completely empty file, with nothing else in it. To name the file, VS Code will probably use the drop-down that is often refereed to as the quick input menu. 
The file needs to be a JavaScript file, as a consequence, the file must end with the file extension \".js\". So I can reference the file later, I will call mine \"main.js\", but you can call your whatever you want, so long as you know which file I am referencing when you read \"main.js\".\n\nIn the same folder as \"main.js\", create one more new file without a file extension. This file MUST HAVE THE NAME...\n\n.prettierrc\n\n\n\n\nNOTE: \"The file has a period (or dot) as the first character in its name (this makes it a hidden file).\"\n\n\n\nAdd the following prettier configuration to the \".prettierrc\" file you just created.\n\n{\n \"trailingComma\": \"es5\",\n \"tabWidth\": 4,\n \"semi\": true,\n \"singleQuote\": true\n}\n\n\n\n\n**Execute the following commands\"\n\n\n $ npm init\n\n\nThe command will ask a bunch of questions, just press enter for each one to quickly configure the environment with the default npm/Node.js configuration.\nThe purpose of this is simply to create a valid \"package.json\" file.\n\n\n\n $ sudo npm i -g prettier && npm i -D prettier\n\n // Or you can execute it as two commands, like this:\n\n $ sudo npm i -g prettier\n $ npm i -D prettier\n\n\nThe command (or commands, depending on how you enter them) install prettier as a project dependency, and as a global Node.js package.\n\n\n\n\n\nNOTE: \"Make sure that you have prettier installed as a vscode extension. And make sure that you have only one prettier extension. Having multiple can create problems and confusion. The one you should have should have the Extension ID: esbenp.prettier-vscode \"\n\n\n\nPrettier Should work now. Use the main.js file we created early to write some javascript, then press F1 to open the quick input, type the word \"format document\", until you see the option \"Format Document\", which you want to click. Then choose prettier from the menu. Prettier won't format if you have erroneous code, it needs to be free from error. (if you want to fix errors use a linter like ESLint).\n\nYou can add a bunch of blank lines, or put braces on the wrong line, leave out semi colons, and prettier should format all of those mistakes.\n\n\n\n"
] | [
0
] | [] | [] | [
"css",
"javascript",
"prettier",
"visual_studio_code"
] | stackoverflow_0074675162_css_javascript_prettier_visual_studio_code.txt |
Q:
How to find and fix a Rails and Couchbase memory leak
I have the following test code:
def loop_bucket_gets
bucket = Couchbase::Bucket.new({:node_list => ['xxx.xxx.xxx.xxx:8091', 'yyy.yyy.yyy.yyy:8091'],
:bucket => 'Foo',
:pool => 'default',
:expires_in => 1.day,
:default_format => :marshal,
:key_prefix => '_foo'
})
i = 0
loop do
begin
i += 1
bucket.get "ABC#{i}"
rescue ::Couchbase::Error::Base => e
nil
end
end
end
When I execute this in the Rails console the memory leaks.
I'm using:
couchbase 1.3.10 gem
libcouchbase 2.4.3
I created an issue at https://www.couchbase.com/issues/browse/RCBC-187
A:
Here are some possible causes of the memory leak in your code:
The bucket variable is not being garbage collected because it is in the global scope. You can fix this by moving the declaration of the bucket variable inside the loop_bucket_gets method.
The bucket variable is being referenced by the loop block, which is preventing it from being garbage collected. You can fix this by using a block variable to hold a reference to the bucket object inside the loop block.
The bucket.get method is not releasing the memory allocated for the returned value. You can fix this by explicitly setting the returned value to nil after it is used.
Here is an updated version of the loop_bucket_gets method that addresses these issues:
def loop_bucket_gets
i = 0
loop do
begin
# Create the Couchbase bucket object inside the loop block
bucket = Couchbase::Bucket.new({:node_list => ['xxx.xxx.xxx.xxx:8091', 'yyy.yyy.yyy.yyy:8091'],
:bucket => 'Foo',
:pool => 'default',
:expires_in => 1.day,
:default_format => :marshal,
:key_prefix => '_foo'
})
i += 1
# Use a block variable to hold a reference to the bucket object
result = bucket.get "ABC#{i}"
# Explicitly set the result to nil after it is used
result = nil
rescue ::Couchbase::Error::Base => e
# Set the result to nil if an error occurred
result = nil
end
end
end
You may also want to consider using the ObjectSpace.garbage_collect method to manually trigger garbage collection after each iteration of the loop block. This can help to reduce memory usage and prevent the memory leak from occurring.
def loop_bucket_gets
i = 0
loop do
begin
bucket = Couchbase::Bucket
A:
To fix the memory leak, you need to determine what is causing it. One possible cause is that the objects returned by the bucket.get method are not being garbage collected. You can fix this by setting the objects to nil after you are done with them, or by using the ObjectSpace.garbage_collect method to explicitly trigger garbage collection.
Another potential cause of the memory leak is that the bucket object itself is not being garbage collected. You can fix this by ensuring that the bucket object goes out of scope when you are done with it, or by explicitly calling the bucket.close method to close the connection to the Couchbase server.
To debug the memory leak further, you can use tools like the ObjectSpace.count_objects method and the GC.stat method to track the number of objects and the amount of memory being used by your application. You can also use the GC.start method to trigger garbage collection manually and see if it helps to reduce the memory usage of your application.
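As a rough sketch of that measuring approach (the loop bound is illustrative and the bucket setup is assumed to be the one from the question), you can snapshot ObjectSpace.count_objects around a bounded run and look at what keeps growing after a forced garbage collection:
GC.start
before = ObjectSpace.count_objects.dup

# ... run a bounded version of the loop here, e.g. 10_000 bucket.get calls ...

GC.start # collect first, so the delta only shows objects that are actually retained
after = ObjectSpace.count_objects
(after.keys | before.keys).each do |key|
  delta = after[key].to_i - before[key].to_i
  puts "#{key}: #{delta}" if delta != 0
end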
A:
Here is an example of how you could modify your code to avoid memory leaks:
def loop_bucket_gets
bucket = Couchbase::Bucket.new({:node_list => ['xxx.xxx.xxx.xxx:8091', 'yyy.yyy.yyy.yyy:8091'],
:bucket => 'Foo',
:pool => 'default',
:expires_in => 1.day,
:default_format => :marshal,
:key_prefix => '_foo'
})
# Create a queue to store the keys that we need to retrieve from the bucket
queue = Queue.new
# Add all the keys that we want to retrieve from the bucket to the queue
(1..1000).each { |i| queue << "ABC#{i}" }
# Use a thread pool to retrieve the keys from the bucket in parallel
# This will help reduce the memory usage and avoid memory leaks
thread_pool = Concurrent::FixedThreadPool.new(10)
loop do
# Check if the queue is empty
break if queue.empty?
# Retrieve a key from the queue
key = queue.pop
# Retrieve the value for the key from the bucket in a separate thread
thread_pool.post do
begin
value = bucket.get(key)
rescue ::Couchbase::Error::Base => e
# Handle the error as appropriate
nil
end
end
end
# Shut down the thread pool when we're done
thread_pool.shutdown
thread_pool.wait_for_termination
end
This code will retrieve the values for the keys ABC1, ABC2, ..., ABC1000 from the Couchbase bucket in parallel using a thread pool, which should help reduce the memory usage and avoid memory leaks.
| How to find and fix a Rails and Couchbase memory leak | I have the following test code:
def loop_bucket_gets
bucket = Couchbase::Bucket.new({:node_list => ['xxx.xxx.xxx.xxx:8091', 'yyy.yyy.yyy.yyy:8091'],
:bucket => 'Foo',
:pool => 'default',
:expires_in => 1.day,
:default_format => :marshal,
:key_prefix => '_foo'
})
i = 0
loop do
begin
i += 1
bucket.get "ABC#{i}"
rescue ::Couchbase::Error::Base => e
nil
end
end
end
When I execute this in the Rails console the memory leaks.
I'm using:
couchbase 1.3.10 gem
libcouchbase 2.4.3
I created an issue at https://www.couchbase.com/issues/browse/RCBC-187
| [
"Here are some possible causes of the memory leak in your code:\nThe bucket variable is not being garbage collected because it is in the global scope. You can fix this by moving the declaration of the bucket variable inside the loop_bucket_gets method.\nThe bucket variable is being referenced by the loop block, which is preventing it from being garbage collected. You can fix this by using a block variable to hold a reference to the bucket object inside the loop block.\nThe bucket.get method is not releasing the memory allocated for the returned value. You can fix this by explicitly setting the returned value to nil after it is used.\nHere is an updated version of the loop_bucket_gets method that addresses these issues:\ndef loop_bucket_gets\n i = 0\n loop do\n begin\n # Create the Couchbase bucket object inside the loop block\n bucket = Couchbase::Bucket.new({:node_list => ['xxx.xxx.xxx.xxx:8091', 'yyy.yyy.yyy.yyy:8091'],\n :bucket => 'Foo',\n :pool => 'default',\n :expires_in => 1.day,\n :default_format => :marshal,\n :key_prefix => '_foo'\n })\n\n i += 1\n # Use a block variable to hold a reference to the bucket object\n result = bucket.get \"ABC#{i}\"\n\n # Explicitly set the result to nil after it is used\n result = nil\n rescue ::Couchbase::Error::Base => e\n # Set the result to nil if an error occurred\n result = nil\n end\n end\nend\n\nYou may also want to consider using the ObjectSpace.garbage_collect method to manually trigger garbage collection after each iteration of the loop block. This can help to reduce memory usage and prevent the memory leak from occurring.\ndef loop_bucket_gets\n i = 0\n loop do\n begin\n bucket = Couchbase::Bucket\n\n",
"To fix the memory leak, you need to determine what is causing it. One possible cause is that the objects returned by the bucket.get method are not being garbage collected. You can fix this by setting the objects to nil after you are done with them, or by using the ObjectSpace.garbage_collect method to explicitly trigger garbage collection.\nAnother potential cause of the memory leak is that the bucket object itself is not being garbage collected. You can fix this by ensuring that the bucket object goes out of scope when you are done with it, or by explicitly calling the bucket.close method to close the connection to the Couchbase server.\nTo debug the memory leak further, you can use tools like the ObjectSpace.count_objects method and the GC.stat method to track the number of objects and the amount of memory being used by your application. You can also use the GC.start method to trigger garbage collection manually and see if it helps to reduce the memory usage of your application.\n",
"Here is an example of how you could modify your code to avoid memory leaks:\ndef loop_bucket_gets\n bucket = Couchbase::Bucket.new({:node_list => ['xxx.xxx.xxx.xxx:8091', 'yyy.yyy.yyy.yyy:8091'],\n :bucket => 'Foo',\n :pool => 'default',\n :expires_in => 1.day,\n :default_format => :marshal,\n :key_prefix => '_foo'\n })\n\n # Create a queue to store the keys that we need to retrieve from the bucket\n queue = Queue.new\n\n # Add all the keys that we want to retrieve from the bucket to the queue\n (1..1000).each { |i| queue << \"ABC#{i}\" }\n\n # Use a thread pool to retrieve the keys from the bucket in parallel\n # This will help reduce the memory usage and avoid memory leaks\n thread_pool = Concurrent::FixedThreadPool.new(10)\n\n loop do\n # Check if the queue is empty\n break if queue.empty?\n\n # Retrieve a key from the queue\n key = queue.pop\n\n # Retrieve the value for the key from the bucket in a separate thread\n thread_pool.post do\n begin\n value = bucket.get(key)\n rescue ::Couchbase::Error::Base => e\n # Handle the error as appropriate\n nil\n end\n end\n end\n\n # Shut down the thread pool when we're done\n thread_pool.shutdown\n thread_pool.wait_for_termination\nend\n\nThis code will retrieve the values for the keys ABC1, ABC2, ..., ABC1000 from the Couchbase bucket in parallel using a thread pool, which should help reduce the memory usage and avoid memory leaks.\n"
] | [
0,
0,
0
] | [
"This loop will go for infinite time. You should pass a breaking condition.\n"
] | [
-9
] | [
"caching",
"couchbase",
"ruby",
"ruby_on_rails"
] | stackoverflow_0026712241_caching_couchbase_ruby_ruby_on_rails.txt |
Q:
Type check an Any variable for Data Class
I have a class that has a constructor of type Any. I'm passing an instance of a Data Class to that constructor. How can I type check the Any variable to make sure it contains a Data Class?
What I tried so far:
private var myObject : Any
fun dataClassTypeCheck(): Boolean {
if (myObject is KClass<*>) {return true}
return false
}
A:
If you want to know if myObject has a type which is a data class then it's:
myObject::class.isData.
If you want to know if myObject is a KClass object of a data class then it's: myObject.isData
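A minimal sketch of how that looks in the shape of the question's class (Payload is a made-up data class for illustration; depending on the Kotlin version, the kotlin-reflect artifact may be needed on the classpath for KClass.isData):
data class Payload(val id: Int)

class Holder(private val myObject: Any) {
    // true only when the runtime class of myObject is declared as a data class
    fun dataClassTypeCheck(): Boolean = myObject::class.isData
}

fun main() {
    println(Holder(Payload(1)).dataClassTypeCheck())      // true
    println(Holder("plain string").dataClassTypeCheck())  // false
}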
A:
if you have Class<?>:
MyObjectClass::class.java.kotlin.isData
and if you have instance of class:
myObject.javaClass.kotlin.isData
| Type check an Any variable for Data Class | I have a class that has a constructor of type Any. I'm passing an instance of a Data Class to that constructor. How can I type check the Any variable to make sure it contains a Data Class?
What I tried so far:
private var myObject : Any
fun dataClassTypeCheck(): Boolean {
if (myObject is KClass<*>) {return true}
return false
}
| [
"If you want to know if myObject has a type which is a data class then it's:\nmyObject::class.isData.\nIf you want to know if myObject is a KClass object of a data class then it's: myObject.isData\n",
"if you have Class<?>:\nMyObjectClass::class.java.kotlin.isData\n\nand if you have instance of class:\nmyObject.javaCalass.kotlin.isData\n\n"
] | [
2,
0
] | [] | [] | [
"kotlin"
] | stackoverflow_0059751170_kotlin.txt |
Q:
REGEX that matches a paragraph that doesn't contain a word avoiding \n
I have a REGEX that finds a word inside a paragraph, while avoiding \n.
(?i)(?<=\b|\\n)cat\b
Something something \ncat\n - match
Something something cat - match
Something something cats - no match, as expected
I want the negative of this REGEX - does a paragraph not contain the word.
Something something \ncat\n - no match - contains the word
Something something cat - no match - contains the word
Something something cats - match
Something something \ncatss\n - match
I've tried a Negative lookbehind but that doesn't seem to work
A:
To negate a regex, you can use a negative lookahead assertion. This will match any character that is not followed by the specified pattern. For example, to match a paragraph that does not contain the word "cat", you could use the following regex:
^(?!.*\bcat\b).*$
This regex uses a negative lookahead assertion ((?!...)) to match any character that is not followed by a word boundary (\b), the word "cat", and another word boundary. The ^ and $ anchors are used to match the start and end of the paragraph, respectively.
Here's an example of how you could use this regex in your code:
val regex = Regex("^(?!.*\\bcat\\b).*$")
val paragraph = "Something something cat"
if (regex.matches(paragraph)) {
// paragraph does not contain the word "cat"
}
With the example paragraph "Something something cat" the condition will be false, because that paragraph does contain the word "cat"; a paragraph such as "Something something cats" would match, because it does not contain the whole word.
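Since the question is tagged scala, here is a hedged sketch of the negated check there. It mirrors the original (?:\b|\\n) prefix so that paragraphs containing literal \n sequences, as they appear in the question, still count as containing the word; String.matches comes from the underlying Java String, so no imports are needed:
object NoCatCheck {
  // Succeeds only when the paragraph does NOT contain "cat" as a whole word
  // (optionally preceded by a literal backslash-n sequence).
  private val noCat = """(?is)^(?!.*(?:\b|\\n)cat\b).*$"""

  def main(args: Array[String]): Unit = {
    val samples = Seq(
      "Something something \\ncat\\n",   // contains the word  -> false
      "Something something cat",         // contains the word  -> false
      "Something something cats",        // no whole word      -> true
      "Something something \\ncatss\\n"  // no whole word      -> true
    )
    samples.foreach(p => println(s"$p -> ${p.matches(noCat)}"))
  }
}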
| REGEX that matches a paragraph that doesn't contain a word avoiding \n | I have a REGEX that finds a word inside a paragraph, while avoiding \n.
(?i)(?<=\b|\\n)cat\b
Something something \ncat\n - match
Something something cat - match
Something something cats - no match, as expected
I want the negative of this REGEX - does a paragraph not contain the word.
Something something \ncat\n - no match - contains the word
Something something cat - no match - contains the word
Something something cats - match
Something something \ncatss\n - match
I've tried a Negative lookbehind but that doesn't seem to work
| [
"To negate a regex, you can use a negative lookahead assertion. This will match any character that is not followed by the specified pattern. For example, to match a paragraph that does not contain the word \"cat\", you could use the following regex:\n^(?!.*\\bcat\\b).*$\n\nThis regex uses a negative lookahead assertion ((?!...)) to match any character that is not followed by a word boundary (\\b), the word \"cat\", and another word boundary. The ^ and $ anchors are used to match the start and end of the paragraph, respectively.\nHere's an example of how you could use this regex in your code:\nval regex = Regex(\"^(?!.*\\\\bcat\\\\b).*$\")\nval paragraph = \"Something something cat\"\n\nif (regex.matches(paragraph)) {\n // paragraph does not contain the word \"cat\"\n}\n\nThis code will match the paragraph \"Something something cat\" because it does not contain the word \"cat\". It will not match the paragraph \"Something something \\ncat\\n\" because it does contain the word \"cat\".\n"
] | [
0
] | [] | [] | [
"regex",
"regex_negation",
"scala"
] | stackoverflow_0074675949_regex_regex_negation_scala.txt |
Q:
Exception starting filter [springSecurityFilterChain] java.lang.ClassNotFoundException: org.springframework.web.filter.DelegatingFilterProxy
What could be the reason for one app in Tomcat not to start?
I kindly ask not to downvote, as in my case I had no clues about the problem and it was impossible to know its origins. If the report were deleted because of downvotes, no one would see my solution, which could be helpful for others.
catalina.2022-12-04.log
04-Dec-2022 18:52:38.249 INFO [main] org.apache.catalina.startup.HostConfig.deployWAR Deploying web application archive [/opt/tomcat_shared/webapps/web-api.war]
04-Dec-2022 18:52:38.796 INFO [main] org.apache.jasper.servlet.TldScanner.scanJars At least one JAR was scanned for TLDs yet contained no TLDs. Enable debug logging for this logger for a complete list of JARs that were scanned but no TLDs were found in them. Skipping unneeded JARs during scanning can improve startup time and JSP compilation time.
04-Dec-2022 18:52:38.797 SEVERE [main] org.apache.catalina.core.StandardContext.startInternal One or more Filters failed to start. Full details will be found in the appropriate container log file
04-Dec-2022 18:52:38.797 SEVERE [main] org.apache.catalina.core.StandardContext.startInternal Context [/web-api] startup failed due to previous errors
04-Dec-2022 18:52:38.819 INFO [main] org.apache.catalina.startup.HostConfig.deployWAR Deployment of web application archive [/opt/tomcat_shared/webapps/web-api.war] has finished in [569] ms
04-Dec-2022 18:52:38.822 INFO [main] org.apache.coyote.AbstractProtocol.start Starting ProtocolHandler ["http-nio-8080"]
04-Dec-2022 18:52:38.834 INFO [main] org.apache.coyote.AbstractProtocol.start Starting ProtocolHandler ["https-jsse-nio-8443"]
04-Dec-2022 18:52:38.843 INFO [main] org.apache.coyote.AbstractProtocol.start Starting ProtocolHandler ["ajp-nio-8009"]
04-Dec-2022 18:52:38.846 INFO [main] org.apache.catalina.startup.Catalina.start Server startup in [45,106] milliseconds
localhost.2022-12-04.log
04-Dec-2022 18:51:50.509 INFO [Thread-24] org.apache.catalina.core.ApplicationContext.log Closing Spring root WebApplicationContext
04-Dec-2022 18:52:25.810 INFO [main] org.apache.catalina.core.ApplicationContext.log 2 Spring WebApplicationInitializers detected on classpath
04-Dec-2022 18:52:29.400 INFO [main] org.apache.catalina.core.ApplicationContext.log Initializing Spring embedded WebApplicationContext
04-Dec-2022 18:52:38.797 SEVERE [main] org.apache.catalina.core.StandardContext.filterStart Exception starting filter [springSecurityFilterChain]
java.lang.ClassNotFoundException: org.springframework.web.filter.DelegatingFilterProxy
at org.apache.catalina.loader.WebappClassLoaderBase.loadClass(WebappClassLoaderBase.java:1363)
at org.apache.catalina.loader.WebappClassLoaderBase.loadClass(WebappClassLoaderBase.java:1186)
at org.apache.catalina.core.DefaultInstanceManager.loadClass(DefaultInstanceManager.java:540)
at org.apache.catalina.core.DefaultInstanceManager.loadClassMaybePrivileged(DefaultInstanceManager.java:521)
at org.apache.catalina.core.DefaultInstanceManager.newInstance(DefaultInstanceManager.java:150)
at org.apache.catalina.core.ApplicationFilterConfig.getFilter(ApplicationFilterConfig.java:249)
at org.apache.catalina.core.ApplicationFilterConfig.<init>(ApplicationFilterConfig.java:102)
at org.apache.catalina.core.StandardContext.filterStart(StandardContext.java:4516)
at org.apache.catalina.core.StandardContext.startInternal(StandardContext.java:5162)
at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:183)
at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:713)
at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:690)
at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:695)
at org.apache.catalina.startup.HostConfig.deployWAR(HostConfig.java:978)
at org.apache.catalina.startup.HostConfig$DeployWar.run(HostConfig.java:1850)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at org.apache.tomcat.util.threads.InlineExecutorService.execute(InlineExecutorService.java:75)
at java.base/java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:118)
at org.apache.catalina.startup.HostConfig.deployWARs(HostConfig.java:773)
at org.apache.catalina.startup.HostConfig.deployApps(HostConfig.java:427)
at org.apache.catalina.startup.HostConfig.start(HostConfig.java:1577)
at org.apache.catalina.startup.HostConfig.lifecycleEvent(HostConfig.java:309)
at org.apache.catalina.util.LifecycleBase.fireLifecycleEvent(LifecycleBase.java:123)
at org.apache.catalina.util.LifecycleBase.setStateInternal(LifecycleBase.java:424)
at org.apache.catalina.util.LifecycleBase.setState(LifecycleBase.java:367)
at org.apache.catalina.core.ContainerBase.startInternal(ContainerBase.java:929)
at org.apache.catalina.core.StandardHost.startInternal(StandardHost.java:831)
at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:183)
at org.apache.catalina.core.ContainerBase$StartChild.call(ContainerBase.java:1377)
at org.apache.catalina.core.ContainerBase$StartChild.call(ContainerBase.java:1367)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at org.apache.tomcat.util.threads.InlineExecutorService.execute(InlineExecutorService.java:75)
at java.base/java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:140)
at org.apache.catalina.core.ContainerBase.startInternal(ContainerBase.java:902)
at org.apache.catalina.core.StandardEngine.startInternal(StandardEngine.java:262)
at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:183)
at org.apache.catalina.core.StandardService.startInternal(StandardService.java:423)
at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:183)
at org.apache.catalina.core.StandardServer.startInternal(StandardServer.java:928)
at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:183)
at org.apache.catalina.startup.Catalina.start(Catalina.java:638)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at org.apache.catalina.startup.Bootstrap.start(Bootstrap.java:350)
at org.apache.catalina.startup.Bootstrap.main(Bootstrap.java:492)
A:
The reason was probably low-level corruption of the files that were extracted from the web archive (inside the tomcat folder) into their own web application folder.
So I deleted the /opt/tomcat/webapps/web-api folder (web-api is the name of the app), restarted Tomcat, and everything was restored: the web archive was extracted again with no losses.
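For reference, a hedged sketch of those steps on the command line; the paths and the tomcat service name are assumptions taken from the question and this answer, so adjust them to your installation:
sudo systemctl stop tomcat                  # or use the shutdown script shipped with your Tomcat
sudo rm -rf /opt/tomcat/webapps/web-api     # remove only the exploded folder, keep web-api.war in place
sudo systemctl start tomcat                 # the WAR is re-extracted cleanly on startup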
| Exception starting filter [springSecurityFilterChain] java.lang.ClassNotFoundException: org.springframework.web.filter.DelegatingFilterProxy | What could be the reason for one app in Tomcat not to start?
I kindly ask not to downvote, as in my case I had no clues about the problem and it was impossible to know its origins. If the report were deleted because of downvotes, no one would see my solution, which could be helpful for others.
catalina.2022-12-04.log
04-Dec-2022 18:52:38.249 INFO [main] org.apache.catalina.startup.HostConfig.deployWAR Deploying web application archive [/opt/tomcat_shared/webapps/web-api.war]
04-Dec-2022 18:52:38.796 INFO [main] org.apache.jasper.servlet.TldScanner.scanJars At least one JAR was scanned for TLDs yet contained no TLDs. Enable debug logging for this logger for a complete list of JARs that were scanned but no TLDs were found in them. Skipping unneeded JARs during scanning can improve startup time and JSP compilation time.
04-Dec-2022 18:52:38.797 SEVERE [main] org.apache.catalina.core.StandardContext.startInternal One or more Filters failed to start. Full details will be found in the appropriate container log file
04-Dec-2022 18:52:38.797 SEVERE [main] org.apache.catalina.core.StandardContext.startInternal Context [/web-api] startup failed due to previous errors
04-Dec-2022 18:52:38.819 INFO [main] org.apache.catalina.startup.HostConfig.deployWAR Deployment of web application archive [/opt/tomcat_shared/webapps/web-api.war] has finished in [569] ms
04-Dec-2022 18:52:38.822 INFO [main] org.apache.coyote.AbstractProtocol.start Starting ProtocolHandler ["http-nio-8080"]
04-Dec-2022 18:52:38.834 INFO [main] org.apache.coyote.AbstractProtocol.start Starting ProtocolHandler ["https-jsse-nio-8443"]
04-Dec-2022 18:52:38.843 INFO [main] org.apache.coyote.AbstractProtocol.start Starting ProtocolHandler ["ajp-nio-8009"]
04-Dec-2022 18:52:38.846 INFO [main] org.apache.catalina.startup.Catalina.start Server startup in [45,106] milliseconds
localhost.2022-12-04.log
04-Dec-2022 18:51:50.509 INFO [Thread-24] org.apache.catalina.core.ApplicationContext.log Closing Spring root WebApplicationContext
04-Dec-2022 18:52:25.810 INFO [main] org.apache.catalina.core.ApplicationContext.log 2 Spring WebApplicationInitializers detected on classpath
04-Dec-2022 18:52:29.400 INFO [main] org.apache.catalina.core.ApplicationContext.log Initializing Spring embedded WebApplicationContext
04-Dec-2022 18:52:38.797 SEVERE [main] org.apache.catalina.core.StandardContext.filterStart Exception starting filter [springSecurityFilterChain]
java.lang.ClassNotFoundException: org.springframework.web.filter.DelegatingFilterProxy
at org.apache.catalina.loader.WebappClassLoaderBase.loadClass(WebappClassLoaderBase.java:1363)
at org.apache.catalina.loader.WebappClassLoaderBase.loadClass(WebappClassLoaderBase.java:1186)
at org.apache.catalina.core.DefaultInstanceManager.loadClass(DefaultInstanceManager.java:540)
at org.apache.catalina.core.DefaultInstanceManager.loadClassMaybePrivileged(DefaultInstanceManager.java:521)
at org.apache.catalina.core.DefaultInstanceManager.newInstance(DefaultInstanceManager.java:150)
at org.apache.catalina.core.ApplicationFilterConfig.getFilter(ApplicationFilterConfig.java:249)
at org.apache.catalina.core.ApplicationFilterConfig.<init>(ApplicationFilterConfig.java:102)
at org.apache.catalina.core.StandardContext.filterStart(StandardContext.java:4516)
at org.apache.catalina.core.StandardContext.startInternal(StandardContext.java:5162)
at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:183)
at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:713)
at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:690)
at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:695)
at org.apache.catalina.startup.HostConfig.deployWAR(HostConfig.java:978)
at org.apache.catalina.startup.HostConfig$DeployWar.run(HostConfig.java:1850)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at org.apache.tomcat.util.threads.InlineExecutorService.execute(InlineExecutorService.java:75)
at java.base/java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:118)
at org.apache.catalina.startup.HostConfig.deployWARs(HostConfig.java:773)
at org.apache.catalina.startup.HostConfig.deployApps(HostConfig.java:427)
at org.apache.catalina.startup.HostConfig.start(HostConfig.java:1577)
at org.apache.catalina.startup.HostConfig.lifecycleEvent(HostConfig.java:309)
at org.apache.catalina.util.LifecycleBase.fireLifecycleEvent(LifecycleBase.java:123)
at org.apache.catalina.util.LifecycleBase.setStateInternal(LifecycleBase.java:424)
at org.apache.catalina.util.LifecycleBase.setState(LifecycleBase.java:367)
at org.apache.catalina.core.ContainerBase.startInternal(ContainerBase.java:929)
at org.apache.catalina.core.StandardHost.startInternal(StandardHost.java:831)
at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:183)
at org.apache.catalina.core.ContainerBase$StartChild.call(ContainerBase.java:1377)
at org.apache.catalina.core.ContainerBase$StartChild.call(ContainerBase.java:1367)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at org.apache.tomcat.util.threads.InlineExecutorService.execute(InlineExecutorService.java:75)
at java.base/java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:140)
at org.apache.catalina.core.ContainerBase.startInternal(ContainerBase.java:902)
at org.apache.catalina.core.StandardEngine.startInternal(StandardEngine.java:262)
at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:183)
at org.apache.catalina.core.StandardService.startInternal(StandardService.java:423)
at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:183)
at org.apache.catalina.core.StandardServer.startInternal(StandardServer.java:928)
at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:183)
at org.apache.catalina.startup.Catalina.start(Catalina.java:638)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at org.apache.catalina.startup.Bootstrap.start(Bootstrap.java:350)
at org.apache.catalina.startup.Bootstrap.main(Bootstrap.java:492)
| [
"The reason was probably the low level files corruption after they were extracted from the web archive inside the tomcat folder to their own web application folder.\nSo I have deleted /opt/tomcat/webapps/web-api folder (web-api is the name of the app) and have restarted the tomcat and everything has been restored well. So, the web-archive has been extracted again with no losses.\n"
] | [
0
] | [] | [] | [
"java",
"spring",
"tomcat"
] | stackoverflow_0074676000_java_spring_tomcat.txt |
Q:
Android Studio Error "Minimum supported Gradle version is 7.0.2. Current version is 6.8."
An error occurred after I downloaded version 6.8 and the latest version of Gradle.
A problem occurred evaluating project ':launcher'.
< Failed to apply plugin 'com.android.internal.version-check'.
<< Minimum supported Gradle version is 7.0.2. Current version is 6.8. If using the gradle wrapper, try editing the distributionUrl...
What do I have to do?
I'm attaching more details in the added pictures.
A:
The error:
Minimum supported Gradle version is 7.0.2. Current version is 6.8.
Likely means:
Your "\gradle"-folder is missing from your project folder:
(Note: Not to be mistaken for the ".gradle"-folder which is a different folder)
Solution:
Get a copy of the "\gradle"-folder from another working project (or create a new project).
Or:
Your "\gradle\wrapper\gradle-wrapper.properties" has an incorrect value in the "distributionUrl=":
Solution:
Change the value in "\gradle\wrapper\gradle-wrapper.properties" to
distributionUrl=https\://services.gradle.org/distributions/gradle-7.0.2-bin.zip
I had the same problem after moving my project to another computer:
Minimum supported Gradle version is 7.0.2. Current version is 6.8.
Please fix the project's Gradle settings.
Gradle Settings.
Clicking on the "Gradle Settings"-link opened the Gradle settings Window, but the Gradle JDK was already correctly set to version 11:
So my next step was to check the Project Structure:
And update the Gradle Version to 7.0.2:
But that led to this error instead:
* What went wrong:
An exception occurred applying plugin request [id: 'com.android.application']
> Failed to apply plugin 'com.android.internal.version-check'.
> Minimum supported Gradle version is 7.0.2. Current version is 6.8. If using the gradle wrapper, try editing the distributionUrl in D:\Files\Code-Project\gradle\wrapper\gradle-wrapper.properties to gradle-7.0.2-all.zip
Now the error points me towards a problem within the "\gradle\wrapper\gradle-wrapper.properties"-file.
Looking into that I found out the real problem - the "\gradle"-folder was completely missing.
(Note: The "\.gradle"-folder is not the same as the "\gradle"-folder)
Copying the "\gradle"-folder from another project solved my problems.
Now my "\gradle\wrapper\gradle-wrapper.properties"-file looks like this:
A:
Next, after updating the Gradle version in Android Studio, select Invalidate Caches / Restart to solve the issue.
Go to Menu File -> Invalidate Caches... -> Invalidate and Restart.
A:
Go to the gradle-wrapper.properties file.
In the distributionUrl line, change the Gradle version from 6.5 to 7.0.2 (or the current version) when you see this
A:
The error says that the gradle version in your system is less than your project's gradle version. That's the reason why it is unable to compile your project.
System gradle version < Project's gradle version
So there are 2 solutions here,
Change the distribution URL in the gradle-wrapper.properties file in the android/gradle/wrapper directory to distributionUrl=https\://services.gradle.org/distributions/gradle-7.0.2-bin.zip.
Update your gradle plugin. If you use some environment variable, download the latest version from Gradle | Manual Installation and replace it with your current one. Don't forget to update the path variable in system properties.
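For reference, a typical gradle/wrapper/gradle-wrapper.properties after the change looks roughly like this; only the distributionUrl line matters for this error, the other entries are the usual defaults:
distributionBase=GRADLE_USER_HOME
distributionPath=wrapper/dists
distributionUrl=https\://services.gradle.org/distributions/gradle-7.0.2-bin.zip
zipStoreBase=GRADLE_USER_HOME
zipStorePath=wrapper/dists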
| Android Studio Error "Minimum supported Gradle version is 7.0.2. Current version is 6.8." | An error occurred after I downloaded version 6.8 and the latest version of Gradle.
A problem occurred evaluating project ':launcher'.
< Failed to apply plugin 'com.android.internal.version-check'.
<< Minimum supported Gradle version is 7.0.2. Current version is 6.8. If using the gradle wrapper, try editing the distributionUrl...
What do I have to do?
I'm attaching more details in the added pictures.
| [
"The error:\n\nMinimum supported Gradle version is 7.0.2. Current version is 6.8.\n\nLikely means:\n\nYour \"\\gradle\"-folder is missing from your project folder:\n(Note: Not to be mistaken for the \".gradle\"-folder which is a different folder)\nSolution:\nGet a copy of the \"\\gradle\"-folder from another working project (or create a new project).\n\n\nOr:\n\nYour \"\\gradle\\wrapper\\gradle-wrapper.properties\" has an incorrect value in the \"distributionUrl=\":\nSolution:\nChange the value in \"\\gradle\\wrapper\\gradle-wrapper.properties\" to\ndistributionUrl=https\\://services.gradle.org/distributions/gradle-7.0.2-bin.zip\n\n\n\nI hade the same problem after moving my project to another computer:\n\nMinimum supported Gradle version is 7.0.2. Current version is 6.8.\nPlease fix the project's Gradle settings.\nGradle Settings.\n\nClicking on the \"Gradle Settings\"-link opened the Gradle settings Window, but the Gradle JDK was already correctly set to version 11:\n\nSo my next step was to check the Project Structure:\n\nAnd update the Gradle Version to 7.0.2:\n\nBut that led to this error instead:\n\n\n¤ What went wrong:\nAn exception occurred applying plugin request [id: 'com.android.application']\n» Failed to apply plugin 'com.android.internal.version-check'.\n» Minimum supported Gradle version is 7.0.2. Current version is 6.8. If using the gradle wrapper, try editing the distributionUrl in D:\\Files\\Code-Project\\gradle\\wrapper\\gradle-wrapper.properties to gradle-7.0.2-all.zip\n\nNow the error points me towards a problem within the \"\\gradle\\wrapper\\gradle-wrapper.properties\"-file.\nLooking into that I found out the real problem - the \"\\gradle\"-folder was completely missing.\n(Note: The \"\\.gradle\"-folder is not the same as the \"\\gradle\"-folder)\nCopying the \"\\gradle\"-folder from another project solved my problems.\nNow my \"\\gradle\\wrapper\\gradle-wrapper.properties\"-file looks like this:\n\n",
"Next update the Gradle Version in Android Studio select invalidate cache and restart solve the issue.\nGo to Menu File->Invalidate cache..-> Invalidate and Restart.\n",
"\nGo to gradle wrapper properties\n\nIn the distribution url line change gradle version from 6.5 to 7.0.2 or the current version when you see this\n",
"The error says that the gradle version in your system is less than your project's gradle verion. That's the reason why it is unable to compile your project.\n\nSystem gradle version < Project's gradle version\n\nSo there are 2 solutions here,\n\nChange the distribution URL in the gradle-wrapper.properties file in the android/gradle/wrapper directory to distributionUrl=https\\://services.gradle.org/distributions/gradle-7.0.2-bin.zip.\nUpdate your gradle plugin. If you use some environment variable, download the latest verion from Gradle | Manual Installation and replace it with your current one. Don't forget to update the path variable in system properties.\n\n"
] | [
16,
3,
2,
0
] | [] | [] | [
"android",
"android_gradle_plugin",
"android_studio",
"build.gradle",
"gradle"
] | stackoverflow_0069240429_android_android_gradle_plugin_android_studio_build.gradle_gradle.txt |
Q:
Remove unwanted characters from starting position
Need to remove only the characters at the start.
GFG2014JP34343
D2013GH43422
HHH2014JP34343
CC2013GH43422
Output:
2014JP34343
2013GH43422
2014JP34343
2013GH43422
Tried REGEXP with different pattern
A:
We can use a regex replacement here:
SELECT val, REGEXP_REPLACE(val, '^[^[:digit:]]+', '') AS val_out
FROM yourTable;
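A self-contained way to try this out is to inline the sample values from the question via dual:
SELECT val, REGEXP_REPLACE(val, '^[^[:digit:]]+', '') AS val_out
FROM (
      SELECT 'GFG2014JP34343' AS val FROM dual UNION ALL
      SELECT 'D2013GH43422'          FROM dual UNION ALL
      SELECT 'HHH2014JP34343'        FROM dual UNION ALL
      SELECT 'CC2013GH43422'         FROM dual
     );
-- val_out: 2014JP34343, 2013GH43422, 2014JP34343, 2013GH43422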
| Remove unwanted characters from starting position | Need to remove only the characters at the start.
GFG2014JP34343
D2013GH43422
HHH2014JP34343
CC2013GH43422
Output:
2014JP34343
2013GH43422
2014JP34343
2013GH43422
Tried REGEXP with different pattern
| [
"We can use a regex replacement here:\nSELECT val, REGEXP_REPLACE(val, '^[^[:digit:]]+', '') AS val_out\nFROM yourTable;\n\n"
] | [
0
] | [] | [] | [
"oracle"
] | stackoverflow_0074676004_oracle.txt |
Q:
requirements.txt pytorch with ">=" greater than?
I have a requirements.txt from a Github repo that contains following lines:
torch>=1.7.0,!=1.12.0
torchvision>=0.8.1,!=0.13.0
// and more
As I search SO and google they say I need to install pytorch with cuda specified, e.g. +cu110; in order to enable GPU and use the installed cuda.
So, for example this command does work on CLI: pip install torch==1.7.1+cu110 -f https://download.pytorch.org/whl/torch_stable.html
But the problem is with requirements.txt.
I looked at Install PyTorch from requirements.txt - Stack Overflow and tried some solutions, but they didn't work, as shown below.
// simply added `+cu110`
// didn't work
torch>=1.7.0+cu110,!=1.12.0
torchvision>=0.8.1+cu110,!=0.13.0
// w/ --extra-index-url
// didn't work
--extra-index-url https://download.pytorch.org/whl/cu110
torch>=1.7.0+cu110,!=1.12.0
--extra-index-url https://download.pytorch.org/whl/cu110
torchvision>=0.8.1+cu110,!=0.13.0
// w/ -f
// didn't work
-f https://download.pytorch.org/whl/torch_stable.html
torch>=1.7.0+cu110,!=1.12.0
-f https://download.pytorch.org/whl/torch_stable.html
torchvision>=0.8.1+cu110,!=0.13.0
So, is it possible to work with the combination of >= and +cu110 in requirements.txt?
A:
By checking the version history page in PyPi, it seems that "+cu110" should be removed from requirements file:
https://pypi.org/project/torch/#history
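Combining that with the --extra-index-url line already tried in the question, a requirements.txt along these lines is worth a try; whether pip actually picks a CUDA 11.0 wheel still depends on the index contents and your platform, so treat this as a sketch rather than a guaranteed fix:
--extra-index-url https://download.pytorch.org/whl/cu110
torch>=1.7.0,!=1.12.0
torchvision>=0.8.1,!=0.13.0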
| requirements.txt pytorch with ">=" greater than? | I have a requirements.txt from a Github repo that contains following lines:
torch>=1.7.0,!=1.12.0
torchvision>=0.8.1,!=0.13.0
// and more
As I search SO and google they say I need to install pytorch with cuda specified, e.g. +cu110; in order to enable GPU and use the installed cuda.
So, for example this command does work on CLI: pip install torch==1.7.1+cu110 -f https://download.pytorch.org/whl/torch_stable.html
But the problem is with requirements.txt.
I looked at Install PyTorch from requirements.txt - Stack Overflow and tried some solutions, but they didn't work, as shown below.
// simply added `+cu110`
// didn't work
torch>=1.7.0+cu110,!=1.12.0
torchvision>=0.8.1+cu110,!=0.13.0
// w/ --extra-index-url
// didn't work
--extra-index-url https://download.pytorch.org/whl/cu110
torch>=1.7.0+cu110,!=1.12.0
--extra-index-url https://download.pytorch.org/whl/cu110
torchvision>=0.8.1+cu110,!=0.13.0
// w/ -f
// didn't work
-f https://download.pytorch.org/whl/torch_stable.html
torch>=1.7.0+cu110,!=1.12.0
-f https://download.pytorch.org/whl/torch_stable.html
torchvision>=0.8.1+cu110,!=0.13.0
So, is it possible to work with the combination of >= and +cu110 in requirements.txt?
| [
"By checking the version history page in PyPi, it seems that \"+cu110\" should be removed from requirements file:\nhttps://pypi.org/project/torch/#history\n"
] | [
0
] | [] | [] | [
"pip",
"pytorch",
"requirements.txt"
] | stackoverflow_0074675397_pip_pytorch_requirements.txt.txt |
Q:
Angular2 router: how to correctly load children modules with their own routing rules
here is my Angular2 app structure:
Here is part of my code. The following is the main module of the Angular2 app, that imports its routing rules and a child module (EdgeModule) and uses some components related to some pages.
app.module.ts
@NgModule({
declarations: [
AppComponent,
PageNotFoundComponent,
LoginComponent
],
imports: [
...
appRouting,
EdgeModule
],
providers: [
appRoutingProviders,
LoginService
],
bootstrap: [AppComponent]
})
export class AppModule {
}
Here are the routing rules for the main module. It has paths for the login page and the page-not-found page.
app.routing.ts
const appRoutes: Routes = [
{ path: 'login', component: LoginComponent },
{ path: '**', component: PageNotFoundComponent }
];
export const appRoutingProviders: any[] = [];
export const appRouting = RouterModule.forRoot(appRoutes, { useHash: true });
Here is EdgeModule, which declares the components it uses and imports its own routing rules and 2 child modules (FirstSectionModule and SecondSectionModule).
edge.module.ts
@NgModule({
declarations: [
EdgeComponent,
SidebarComponent,
TopbarComponent
],
imports: [
...
edgeRouting,
FirstSectionModule,
SecondSectionModule
],
providers: [
AuthGuard
]
})
export class EdgeModule {
}
Here are the routing rules for the module that, as you can see, loads the topbar and sidebar components.
edge.routing.ts
Paths['edgePaths'] = {
firstSection: 'firstSection',
secondSection: 'secondSection'
};
const appRoutes: Routes = [
{ path: '', component: EdgeComponent,
canActivate: [AuthGuard],
children: [
{ path: Paths.edgePaths.firstSection, loadChildren: '../somepath/first-section.module#FirstModule' },
{ path: Paths.edgePaths.secondSection, loadChildren: '../someotherpath/second-section.module#SecondModule' },
{ path: '', redirectTo: edgePaths.dashboard, pathMatch: 'full' }
]
}
];
export const edgeRouting = RouterModule.forChild(appRoutes);
Finally, this is one of the two child modules; it has its own components and imports its routing rules.
first-section.module.ts
@NgModule({
declarations: [
FirstSectionComponent,
SomeComponent
],
imports: [
...
firstSectionRouting
],
providers: [
AuthGuard,
]
})
export class FirstSectionModule {
}
These are the routing rules for the pages (components) of FirstSectionModule
first-section.routing.ts
Paths['firstSectionPaths'] = {
someSubPage: 'some-sub-page',
someOtherSubPage: 'some-other-sub-page'
};
const appRoutes: Routes = [
{
path: '',
children: [
{ path: Paths.firstSectionPaths.someSubPage, component: someSubPageComponent},
{ path: Paths.firstSectionPaths.someOtherSubPage, component: someOtherSubPageComponent},
{ path: '', component: AnagraficheComponent }
]
}
];
export const firstSectionRouting = RouterModule.forChild(appRoutes);
Almost the same happens for second-section.module.ts and second-section.routing.ts files.
When I run the app, the first thing that loads is the page related to FirstSectionComponent, with no sidebar or topbar.
Can you tell me what's wrong with my code? There are no errors in the console.
A:
You can try this using loadChildren where the homeModule, productModule, aboutModule have their own route rules.
const routes: Routes = [
{ path: 'home', loadChildren: 'app/areas/home/home.module#homeModule' },
{ path: 'product', loadChildren: 'app/areas/product/product.module#ProductModule' },
{ path: 'drawing', loadChildren: 'app/areas/about/about.module#AboutModule' }
];
export const appRouting = RouterModule.forRoot(routes);
and the home route rules will be like
export const RouteConfig: Routes = [
{
path: '',
component: HomeComponent,
canActivate: [AuthGuard],
children: [
{ path: '', component: HomePage },
{ path: 'test/:id', component: Testinfo},
{ path: 'test2/:id', component: Testinfo1},
{ path: 'test3/:id', component: Testinfo2}
]
}
];
this is also known as lazy loading the modules.
{ path: 'lazy', loadChildren: 'lazy/lazy.module#LazyModule' }
There's a few important things to notice here:
We use the property loadChildren instead of component.
We pass a string instead of a symbol to avoid loading the module eagerly.
We define not only the path to the module but the name of the class as well.
There's nothing special about LazyModule other than it has its own routing and a component called LazyComponent.
Check out this awesome tutorial related to this:
https://angular-2-training-book.rangle.io/handout/modules/lazy-loading-module.html
A:
In your app.routing.ts, there are only 2 routes and no route included to navigate to the Main section (as in the diagram). There needs to be a route entry with the loadChildren property so it will load the module for the Main section.
routes: Routes = [...
{
path: 'main', loadChildren: '<file path>/<Edge module file name>#EdgeModule'
}
...];
This will load the rest of the modules, components routes and everything insite the EdgeModule.
A:
Not sure if I get the problem correctly, but here is a small code snippet which I used to generate routes dynamically:
app.component.ts:
constructor(private _router: Router) {
}
ngOnInit() {
...
this._router.config[0].children = myService.getRoutes();
this._router.resetConfig(this._router.config);
console.debug('Routes:', this._router.config);
...
}
It is not OOTB solution, but you can get information about current routes.
A:
It's a dependency injection problem.
We don't need to import FirstSectionModule & SecondSectionModule in EdgeModule; the routes that use them can be defined inside FirstSectionModule & SecondSectionModule themselves.
So just removing them from EdgeModule will work.
A:
In Angular 13 & 13+ version, we can do like:-
const routes: Routes = [
{
path: "user",
    loadChildren: () => import("./user/user.module").then((m) => m.UserModule)
}
];
| Angular2 router: how to correctly load children modules with their own routing rules | here is my Angular2 app structure:
Here is part of my code. The following is the main module of the Angular2 app, that imports its routing rules and a child module (EdgeModule) and uses some components related to some pages.
app.module.ts
@NgModule({
declarations: [
AppComponent,
PageNotFoundComponent,
LoginComponent
],
imports: [
...
appRouting,
EdgeModule
],
providers: [
appRoutingProviders,
LoginService
],
bootstrap: [AppComponent]
})
export class AppModule {
}
Here are the routing rules for the main module. It has paths for the login page and the page-not-found page.
app.routing.ts
const appRoutes: Routes = [
{ path: 'login', component: LoginComponent },
{ path: '**', component: PageNotFoundComponent }
];
export const appRoutingProviders: any[] = [];
export const appRouting = RouterModule.forRoot(appRoutes, { useHash: true });
Here is EdgeModule, which declares the components it uses and imports its own routing rules and 2 child modules (FirstSectionModule and SecondSectionModule).
edge.module.ts
@NgModule({
declarations: [
EdgeComponent,
SidebarComponent,
TopbarComponent
],
imports: [
...
edgeRouting,
FirstSectionModule,
SecondSectionModule
],
providers: [
AuthGuard
]
})
export class EdgeModule {
}
Here are the routing rules for the module that, as you can see, loads the topbar and sidebar components.
edge.routing.ts
Paths['edgePaths'] = {
firstSection: 'firstSection',
secondSection: 'secondSection'
};
const appRoutes: Routes = [
{ path: '', component: EdgeComponent,
canActivate: [AuthGuard],
children: [
{ path: Paths.edgePaths.firstSection, loadChildren: '../somepath/first-section.module#FirstModule' },
{ path: Paths.edgePaths.secondSection, loadChildren: '../someotherpath/second-section.module#SecondModule' },
{ path: '', redirectTo: edgePaths.dashboard, pathMatch: 'full' }
]
}
];
export const edgeRouting = RouterModule.forChild(appRoutes);
Finally, this is one of the two child modules; it has its own components and imports its routing rules.
first-section.module.ts
@NgModule({
declarations: [
FirstSectionComponent,
SomeComponent
],
imports: [
...
firstSectionRouting
],
providers: [
AuthGuard,
]
})
export class FirstSectionModule {
}
These are the routing rules for the pages (components) of FirstSectionModule
first-section.routing.ts
Paths['firstSectionPaths'] = {
someSubPage: 'some-sub-page',
someOtherSubPage: 'some-other-sub-page'
};
const appRoutes: Routes = [
{
path: '',
children: [
{ path: Paths.firstSectionPaths.someSubPage, component: someSubPageComponent},
{ path: Paths.firstSectionPaths.someOtherSubPage, component: someOtherSubPageComponent},
{ path: '', component: AnagraficheComponent }
]
}
];
export const firstSectionRouting = RouterModule.forChild(appRoutes);
Almost the same happens for second-section.module.ts and second-section.routing.ts files.
When I run the app, the first thing that loads is the page related to FirstSectionComponent, with no sidebar or topbar.
Can you tell me what's wrong with my code? There are no errors in the console.
| [
"You can try this using loadChildren where the homeModule, productModule, aboutModule have their own route rules.\nconst routes: Routes = [\n { path: 'home', loadChildren: 'app/areas/home/home.module#homeModule' },\n { path: 'product', loadChildren: 'app/areas/product/product.module#ProductModule' },\n { path: 'drawing', loadChildren: 'app/areas/about/about.module#AboutModule' }\n];\n\nexport const appRouting = RouterModule.forRoot(routes);\n\nand the home route rules will be like\nexport const RouteConfig: Routes = [\n {\n path: '',\n component: HomeComponent,\n canActivate: [AuthGuard],\n children: [\n { path: '', component: HomePage },\n { path: 'test/:id', component: Testinfo},\n { path: 'test2/:id', component: Testinfo1},\n { path: 'test3/:id', component: Testinfo2}\n ]\n }\n];\n\nthis is also known as lazy loading the modules.\n{ path: 'lazy', loadChildren: 'lazy/lazy.module#LazyModule' }\n\nThere's a few important things to notice here:\nWe use the property loadChildren instead of component.\nWe pass a string instead of a symbol to avoid loading the module eagerly.\nWe define not only the path to the module but the name of the class as well.\nThere's nothing special about LazyModule other than it has its own routing and a component called LazyComponent.\nCheck out this awesome tutorial related to this:\nhttps://angular-2-training-book.rangle.io/handout/modules/lazy-loading-module.html\n",
"In your app.routing.ts, there are only 2 routes and no route included to navigate to the Main section (as in the diagram). There needs to be a route entry with loadchildren property so it will load the module for the Main section. \nroutes: Routes = [...\n{ \npath: 'main', loadChildren: '<file path>/<Edge module file name>#EdgeModule' \n}\n...];\n\nThis will load the rest of the modules, components routes and everything insite the EdgeModule.\n",
"Not sure if I get the problem correctly, but here is a small code snippet which I used to generate routes dynamically:\napp.component.ts:\nconstructor(private _router: Router) {\n}\n\nngOnInit() {\n ...\n this._router.config[0].children = myService.getRoutes(); \n this._router.resetConfig(this._router.config);\n console.debug('Routes:', this._router.config);\n ...\n}\n\nIt is not OOTB solution, but you can get information about current routes.\n",
"It's a dependency injection problem.\nWe don't need to inject FirstSectionModule & SecondSectionModule in the edgeModule & about route we can use inside of FirstSectionModule & SecondSectionModule.\nSo just removing it from edgeModule will work.\n",
"In Angular 13 & 13+ version, we can do like:-\nconst routes: Routes = [\n {\n path: \"user\",\n loadChildren: () => import(\"./user/user.module\").then((m) => {m.UserModule})\n }\n];\n\n"
] | [
8,
1,
0,
0,
0
] | [] | [] | [
"angular",
"angular2_modules",
"angular2_routing",
"nested_routes",
"url_routing"
] | stackoverflow_0040110827_angular_angular2_modules_angular2_routing_nested_routes_url_routing.txt |
Q:
List row dividers broken by iOS 16
I have a list of section headers, each with 1 or more rows.
Since updating to iOS 16 the row divider lines have been pushed to the right (as in the 1st screenshot).
When running on iOS 15.7 the row dividers are ok (as in 2nd screenshot).
The minimum targeted OS for my app is iOS 15.5
Here is my code (I've only included 1st section header for brevity):
var videoGuideRight: CGFloat {
switch UIDevice.current.name {
case "iPhone SE (1st generation)", "iPod touch (7th generation)":
return 0.18
default:
return 0.2
}
}
var contactRight: CGFloat {
switch UIDevice.current.name {
case "iPhone SE (1st generation)", "iPod touch (7th generation)":
return 0.04
default:
return 0.12
}
}
var contactLeft: CGFloat {
switch UIDevice.current.name {
case "iPhone SE (1st generation)", "iPod touch (7th generation)":
return 0.255
default:
return 0.27
}
}
var contactButtonWidth: CGFloat {
switch UIDevice.current.name {
case "iPhone SE (1st generation)", "iPod touch (7th generation)":
return 1/4.25
default:
return 1/5
}
}
var contactFrameWidth: CGFloat {
switch UIDevice.current.name {
case "iPhone SE (1st generation)", "iPod touch (7th generation)":
return 0.175
default:
return 0.15
}
}
var body: some View {
NavigationView {
VStack {
List {
Section(header: Text("Support")) {
HStack {
Image("about")
Text("About")
.font(.system(size: 15))
.frame(width: UIScreen.main.bounds.width * 0.65, height: 15, alignment: .center)
NavigationLink(destination: AboutView()) { EmptyView() }
}
HStack {
Image("userGuide")
Text("Handbook")
.font(.system(size: 15))
.frame(width: UIScreen.main.bounds.width * 0.65, height: 15, alignment: .center)
NavigationLink(destination: UserGuideView()) { EmptyView() }
}
HStack {
Image("videoGuide")
Link(destination: URL(string: "https://www.tirnaelectronics.co.uk/polylingo-guide")!) { }
Spacer().frame(width: UIScreen.main.bounds.width * 0.04, height: nil, alignment: .center)
Text("Video Guide")
.font(.system(size: 15))
.frame(width: UIScreen.main.bounds.width * 0.3, height: 15, alignment: .leading)
Spacer().frame(width: UIScreen.main.bounds.width * videoGuideRight, height: nil, alignment: .center)
}
HStack {
Image("contact")
Spacer().frame(width: UIScreen.main.bounds.width * contactLeft, height: nil, alignment: .center)
Text("Contact")
.font(.system(size: 15))
.frame(width: UIScreen.main.bounds.width * contactFrameWidth, height: 15, alignment: .center)
Spacer().frame(width: UIScreen.main.bounds.width * contactRight, height: nil, alignment: .center)
Text("E-mail")
.fontWeight(.bold)
.frame(width: screenSize.width * contactButtonWidth, height: 20, alignment: .center)
.font(.footnote)
.padding(8)
.background(Color.systemBlue)
.cornerRadius(5)
.foregroundColor(.white)
.overlay(
RoundedRectangle(cornerRadius: 5)
.stroke(Color.systemBlue, lineWidth: 2)
)
.onTapGesture{ mailto() }
}
}
}
.navigationBarTitle("More", displayMode: .inline).opacity(0.8)
.listStyle(InsetGroupedListStyle())
.background(Color.init(.systemGroupedBackground))
if resetScoresPresented {
ResetScoresAlert(isShown: $resetScoresPresented, title: "Are you sure?", message: "All test progress will be lost. This cannot be undone!", onOK: { reset in
if reset {
resetTests()
}
})
}
if noEmailAlertPresented {
NoEmailAlert(showAlert: noEmailAlertPresented)
}
}
}
}
A:
I just ran into the same problem. Try to use
.alignmentGuide(.listRowSeparatorLeading) { _ in 0 }
for your sections (source: https://sarunw.com/posts/swiftui-list-row-separator-insets/)
| List row dividers broken by iOS 16 | I have a list of section headers, each with 1 or more rows.
Since updating to iOS 16 the row divider lines have been pushed to the right (as in the 1st screenshot).
When running on iOS 15.7 the row dividers are ok (as in 2nd screenshot).
The minimum targeted OS for my app is iOS 15.5
Here is my code (I've only included 1st section header for brevity):
var videoGuideRight: CGFloat {
switch UIDevice.current.name {
case "iPhone SE (1st generation)", "iPod touch (7th generation)":
return 0.18
default:
return 0.2
}
}
var contactRight: CGFloat {
switch UIDevice.current.name {
case "iPhone SE (1st generation)", "iPod touch (7th generation)":
return 0.04
default:
return 0.12
}
}
var contactLeft: CGFloat {
switch UIDevice.current.name {
case "iPhone SE (1st generation)", "iPod touch (7th generation)":
return 0.255
default:
return 0.27
}
}
var contactButtonWidth: CGFloat {
switch UIDevice.current.name {
case "iPhone SE (1st generation)", "iPod touch (7th generation)":
return 1/4.25
default:
return 1/5
}
}
var contactFrameWidth: CGFloat {
switch UIDevice.current.name {
case "iPhone SE (1st generation)", "iPod touch (7th generation)":
return 0.175
default:
return 0.15
}
}
var body: some View {
NavigationView {
VStack {
List {
Section(header: Text("Support")) {
HStack {
Image("about")
Text("About")
.font(.system(size: 15))
.frame(width: UIScreen.main.bounds.width * 0.65, height: 15, alignment: .center)
NavigationLink(destination: AboutView()) { EmptyView() }
}
HStack {
Image("userGuide")
Text("Handbook")
.font(.system(size: 15))
.frame(width: UIScreen.main.bounds.width * 0.65, height: 15, alignment: .center)
NavigationLink(destination: UserGuideView()) { EmptyView() }
}
HStack {
Image("videoGuide")
Link(destination: URL(string: "https://www.tirnaelectronics.co.uk/polylingo-guide")!) { }
Spacer().frame(width: UIScreen.main.bounds.width * 0.04, height: nil, alignment: .center)
Text("Video Guide")
.font(.system(size: 15))
.frame(width: UIScreen.main.bounds.width * 0.3, height: 15, alignment: .leading)
Spacer().frame(width: UIScreen.main.bounds.width * videoGuideRight, height: nil, alignment: .center)
}
HStack {
Image("contact")
Spacer().frame(width: UIScreen.main.bounds.width * contactLeft, height: nil, alignment: .center)
Text("Contact")
.font(.system(size: 15))
.frame(width: UIScreen.main.bounds.width * contactFrameWidth, height: 15, alignment: .center)
Spacer().frame(width: UIScreen.main.bounds.width * contactRight, height: nil, alignment: .center)
Text("E-mail")
.fontWeight(.bold)
.frame(width: screenSize.width * contactButtonWidth, height: 20, alignment: .center)
.font(.footnote)
.padding(8)
.background(Color.systemBlue)
.cornerRadius(5)
.foregroundColor(.white)
.overlay(
RoundedRectangle(cornerRadius: 5)
.stroke(Color.systemBlue, lineWidth: 2)
)
.onTapGesture{ mailto() }
}
}
}
.navigationBarTitle("More", displayMode: .inline).opacity(0.8)
.listStyle(InsetGroupedListStyle())
.background(Color.init(.systemGroupedBackground))
if resetScoresPresented {
ResetScoresAlert(isShown: $resetScoresPresented, title: "Are you sure?", message: "All test progress will be lost. This cannot be undone!", onOK: { reset in
if reset {
resetTests()
}
})
}
if noEmailAlertPresented {
NoEmailAlert(showAlert: noEmailAlertPresented)
}
}
}
}
| [
"I just ran into the same problem. Try to use\n.alignmentGuide(.listRowSeparatorLeading) { _ in 0 }\n\nfor your sections (source: https://sarunw.com/posts/swiftui-list-row-separator-insets/)\n"
] | [
0
] | [] | [] | [
"divider",
"list",
"row",
"swift",
"swiftui"
] | stackoverflow_0073820124_divider_list_row_swift_swiftui.txt |
Q:
Losing cell formats when accessing rows
In some circumstances the format (int, float, etc) of a cell is lost when accessing via its row.
In that example the first column has integers and the second floats. But the 111 is converted into 111.0.
dfA = pandas.DataFrame({
'A': [111, 222, 333],
'B': [1.3, 2.4, 3.5],
})
# A 111.0
# B 1.3
# Name: 0, dtype: float64
print(dfA.loc[0])
# <class 'numpy.float64'>
print(type(dfA.loc[0].A))
The output I would expect is like this
A 111
B 1.3
<class 'numpy.int64'>
I have an idea why this happens. But IMHO this isn't user friendly. Can I solve this somehow? The goal is to access (e.g. read) each cell's value without losing its format.
In the full code below you can also see that it is possible when one of the columns is of type string. Weird.
Minimal Working Example
#!/usr/bin/env python3
import pandas
dfA = pandas.DataFrame({
'A': [111, 222, 333],
'B': [1.3, 2.4, 3.5],
})
print(dfA)
dfB = pandas.DataFrame({
'A': [111, 222, 333],
'B': [1.3, 2.4, 3.5],
'C': ['one', 'two', 'three']
})
print(dfB)
print(dfA.loc[0])
print(type(dfA.loc[0].A))
print(dfB.loc[0])
print(type(dfB.loc[0].A))
Output
A B
0 111 1.3
1 222 2.4
2 333 3.5
A B C
0 111 1.3 one
1 222 2.4 two
2 333 3.5 three
A 111.0
B 1.3
Name: 0, dtype: float64
<class 'numpy.float64'>
A 111
B 1.3
C one
Name: 0, dtype: object
<class 'numpy.int64'>
A:
If you want to access a specific value in the DataFrame without losing its data type, you can use the at method instead of the loc method. The at method accesses a scalar value in the DataFrame, so it will preserve the data type of the value. See: print(type(dfA.at[0, 'A']))
In this example, the at method is used to access the value in the first row and first column of the DataFrame. This value is an integer, so the at method returns it as an integer, preserving its data type.
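A small runnable sketch of the difference, reusing the dfA from the question (nothing here is specific to that data; any mixed int/float frame behaves the same way):
import pandas

dfA = pandas.DataFrame({
    'A': [111, 222, 333],
    'B': [1.3, 2.4, 3.5],
})

# Row access builds a Series, which needs one common dtype, so everything becomes float64
print(type(dfA.loc[0].A))    # <class 'numpy.float64'>

# Scalar access keeps the column dtype
print(type(dfA.at[0, 'A']))  # <class 'numpy.int64'>

# Iterating rows as namedtuples (unlike iterrows) also keeps per-column types
for row in dfA.itertuples(index=False):
    print(type(row.A), type(row.B))

If you need whole-row access rather than single cells, itertuples is usually the better fit because it never squeezes the row into a single-dtype Series.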
| Losing cell formats when accessing rows | In some circumstances the format (int, float, etc) of a cell is lost when accessing via its row.
In that example the first column has integers and the second floats. But the 111 is converted into 111.0.
dfA = pandas.DataFrame({
'A': [111, 222, 333],
'B': [1.3, 2.4, 3.5],
})
# A 111.0
# B 1.3
# Name: 0, dtype: float64
print(dfA.loc[0])
# <class 'numpy.float64'>
print(type(dfA.loc[0].A))
The output I would expect is like this
A 111
B 1.3
<class 'numpy.int64'>
I have an idea why this happens. But IMHO this isn't user friendly. Can I solve this somehow? The goal is to access (e.g. read) each cell's value without losing its format.
In the full code below you can also see that it is possible when one of the columns is of type string. Weird.
Minimal Working Example
#!/usr/bin/env python3
import pandas
dfA = pandas.DataFrame({
'A': [111, 222, 333],
'B': [1.3, 2.4, 3.5],
})
print(dfA)
dfB = pandas.DataFrame({
'A': [111, 222, 333],
'B': [1.3, 2.4, 3.5],
'C': ['one', 'two', 'three']
})
print(dfB)
print(dfA.loc[0])
print(type(dfA.loc[0].A))
print(dfB.loc[0])
print(type(dfB.loc[0].A))
Output
A B
0 111 1.3
1 222 2.4
2 333 3.5
A B C
0 111 1.3 one
1 222 2.4 two
2 333 3.5 three
A 111.0
B 1.3
Name: 0, dtype: float64
<class 'numpy.float64'>
A 111
B 1.3
C one
Name: 0, dtype: object
<class 'numpy.int64'>
| [
"If you want to access a specific value in the DataFrame without losing its data type, you can use the at method instead of the loc method. The at method accesses a scalar value in the DataFrame, so it will preserve the data type of the value. See: print(type(dfA.at[0, 'A']))\nIn this example, the at method is used to access the value in the first row and first column of the DataFrame. This value is an integer, so the at method returns it as an integer, preserving its data type.\n"
] | [
1
] | [] | [] | [
"numpy",
"pandas",
"python"
] | stackoverflow_0074677068_numpy_pandas_python.txt |
Q:
Azure DevOps Pipeline - Configure Unit Tests for UWP Project
I'm pretty new to Azure DevOps and I am trying to get it to run the Unit tests (MSTest) as part of the pipeline. I'm using the default generated yaml for UWP. According to the documentation for unit tests I should have something like:
- task: VSTest@1
displayName: Unit tests
inputs:
testAssembly: '**/*test*.dll;-:**\obj\**
This is a high level of the file structure in question (relative to the yaml file):
Pipeline.yml
Project (folder)
Project.sln
ProjectDatabase (folder)
bin (folder)
obj (folder)
ProjectDatabase.csproj
ProjectDatabase.Test (folder)
bin (folder)
obj (folder)
ProjectDatabase.Text.csproj
ProjectDataAccess (folder)
bin (folder)
obj (folder)
ProjectDataAccess.csproj
ProjectDataAccess.Test (folder)
bin (folder)
obj (folder)
ProjectDataAccess.Test.csproj
Each time I've tried varying the path but running the Pipeline just returns:
##[warning]No test assemblies found matching the pattern: '**/**/*test*.dll;-:**\**\obj\**'.
Am I even going down the right path and if so, am I missing something? Thanks in advance and I greatly appreciate any assistance.
A:
It looks like you are on the right track, but it seems like the pattern you are using to match your test assemblies is not correct. The pattern you are using is looking for files with "test" in the name, but it looks like your test assemblies do not have "test" in their names.
Try changing the pattern to the following:
**/*.Test.dll;-:**\obj\**
This pattern will match assemblies with ".Test" in their name, which should match your test assemblies.
It's also possible that your test assemblies are not being built as part of the pipeline. Make sure that your solution builds successfully and that your test assemblies are being produced in the bin folders.
You can verify this by checking the output of the pipeline to see if the test assemblies are being produced. If they are not being produced, you may need to adjust your build steps to ensure that they are being built.
I hope this helps! Let me know if you have any other questions.
| Azure DevOps Pipeline - Configure Unit Tests for UWP Project | I'm pretty new to Azure DevOps and I am trying to get it to run the Unit tests (MSTest) as part of the pipeline. I'm using the default generated yaml for UWP. According to the documentation for unit tests I should have something like:
- task: VSTest@1
displayName: Unit tests
inputs:
testAssembly: '**/*test*.dll;-:**\obj\**
This is a high level of the file structure in question (relative to the yaml file):
Pipeline.yml
Project (folder)
Project.sln
ProjectDatabase (folder)
bin (folder)
obj (folder)
ProjectDatabase.csproj
ProjectDatabase.Test (folder)
bin (folder)
obj (folder)
ProjectDatabase.Text.csproj
ProjectDataAccess (folder)
bin (folder)
obj (folder)
ProjectDataAccess.csproj
ProjectDataAccess.Test (folder)
bin (folder)
obj (folder)
ProjectDataAccess.Test.csproj
Each time I've tried varying the path but running the Pipeline just returns:
##[warning]No test assemblies found matching the pattern: '**/**/*test*.dll;-:**\**\obj\**'.
Am I even going down the right path and if so, am I missing something? Thanks in advance and I greatly appreciate any assistance.
| [
"It looks like you are on the right track, but it seems like the pattern you are using to match your test assemblies is not correct. The pattern you are using is looking for files with \"test\" in the name, but it looks like your test assemblies do not have \"test\" in their names.\nTry changing the pattern to the following:\n**/*.Test.dll;-:**\\obj\\**\n\nThis pattern will match assemblies with \".Test\" in their name, which should match your test assemblies.\nIt's also possible that your test assemblies are not being built as part of the pipeline. Make sure that your solution builds successfully and that your test assemblies are being produced in the bin folders.\nYou can verify this by checking the output of the pipeline to see if the test assemblies are being produced. If they are not being produced, you may need to adjust your build steps to ensure that they are being built.\nI hope this helps! Let me know if you have any other questions.\n"
] | [
0
] | [] | [] | [
"azure",
"devops",
"pipeline",
"unit_testing",
"uwp"
] | stackoverflow_0074660733_azure_devops_pipeline_unit_testing_uwp.txt |
Q:
How to arrange all the alphabets in my name in sorted manner?
How can I arrange the alphabets of my name in sorted order?
I have used the sort function but that didn't work. How can I solve it?
A:
Like this?
import re
my_name = "Mohammed Sardar Saajit"
my_name_with_the_letters_sorted = sorted([character for character in re.sub(r"[^\w]", "", my_name.lower())], key=ord)
print(my_name_with_the_letters_sorted)
['a', 'a', 'a', 'a', 'a', 'd', 'd', 'e', 'h', 'i', 'j', 'm', 'm', 'm', 'o', 'r', 'r', 's', 's', 't']
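A shorter variant of the same idea: sorted already compares characters lexicographically, so the key=ord and the explicit list comprehension are not needed, and join turns the result back into a string.
import re

my_name = "Mohammed Sardar Saajit"
print("".join(sorted(re.sub(r"[^a-z]", "", my_name.lower()))))
# prints: aaaaaddehijmmmorrsst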
| How to arrange all the alphabets in my name in sorted manner? | How can I arrange the alphabets of my name in sorted order?
I have used the sort function but that didn't work. How can I solve it?
| [
"Like this?\nimport re\nmy_name = \"Mohammed Sardar Saajit\"\nmy_name_with_the_letters_sorted = sorted([character for character in re.sub(r\"[^\\w]\", \"\", my_name.lower())], key=ord)\nprint(my_name_with_the_letters_sorted)\n\n['a', 'a', 'a', 'a', 'a', 'd', 'd', 'e', 'h', 'i', 'j', 'm', 'm', 'm', 'o', 'r', 'r', 's', 's', 't']\n\n"
] | [
0
] | [] | [] | [
"python",
"sorting",
"word"
] | stackoverflow_0074676020_python_sorting_word.txt |
Q:
Error (task exception was never retrieved) when running discord bot commands
I am making a discord bot using python, and I have run into an unexplainable error which I am unable to fix. I thought I fixed it by deleting the checks but I'm completely stumped by the massive block of errors I'm getting.
If anyone could please decode even some of this, I would be greatly appreciative.
Task exception was never retrieved
future: <Task finished name='CommandTree-invoker' coro=<CommandTree._from_interaction..wrapper() done, defined at C:\Users\mitsuk\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\discord\app_commands\tree.py:1089> exception=OperationalError('no such table: blacklist')>
Traceback (most recent call last):
File "C:\Users\mitsuk\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\discord\app_commands\tree.py", line 1091, in wrapper
await self._call(interaction)
File "C:\Users\mitsuk\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\discord\app_commands\tree.py", line 1242, in _call
await command._invoke_with_namespace(interaction, namespace)
File "C:\Users\mitsuk\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\discord\ext\commands\hybrid.py", line 436, in _invoke_with_namespace
await command.prepare(ctx)
File "C:\Users\mitsuk\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\discord\ext\commands\core.py", line 919, in prepare
if not await self.can_run(ctx):
File "C:\Users\mitsuk\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\discord\ext\commands\hybrid.py", line 524, in can_run
return await self.app_command._check_can_run(ctx.interaction)
File "C:\Users\mitsuk\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\discord\ext\commands\hybrid.py", line 418, in _check_can_run
if self.wrapped.checks and not await async_all(f(ctx) for f in self.wrapped.checks):
File "C:\Users\mitsuk\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\discord\utils.py", line 674, in async_all
elem = await elem
File "C:\Users\mitsuk\Documents\rirakkumabot\main\helpers\checks.py", line 31, in predicate
if await db_manager.is_blacklisted(context.author.id):
File "C:\Users\mitsuk\Documents\rirakkumabot\main\helpers\db_manager.py", line 8, in is_blacklisted
async with db.execute("SELECT * FROM blacklist WHERE user_id=?", (user_id,)) as cursor:
File "C:\Users\mitsuk\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\aiosqlite\context.py", line 41, in aenter
self._obj = await self._coro
File "C:\Users\mitsuk\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\aiosqlite\core.py", line 184, in execute
cursor = await self._execute(self._conn.execute, sql, parameters)
File "C:\Users\mitsuk\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\aiosqlite\core.py", line 129, in _execute
return await future
File "C:\Users\mitsuk\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\aiosqlite\core.py", line 102, in run
result = function()
sqlite3.OperationalError: no such table: blacklist
A:
In the file C:\Users\mitsuk\Documents\rirakkumabot\main\helpers\db_manager.py, line 8
async with db.execute("SELECT * FROM blacklist WHERE user_id=?", (user_id,)) as cursor:
Error message
sqlite3.OperationalError: no such table: blacklist
This error means that there's no table named blacklist in the database; you may want to create it before accessing it.
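The query in the traceback (SELECT * FROM blacklist WHERE user_id=?) suggests a single user_id column is enough, so a minimal sketch of creating the table at startup could look like the following; the database path and the extra created_at column are assumptions, adjust them to whatever db_manager.py actually connects to.
import aiosqlite

async def init_db(db_path: str = "database.db") -> None:
    # Create the blacklist table once so later SELECTs cannot fail
    async with aiosqlite.connect(db_path) as db:
        await db.execute(
            "CREATE TABLE IF NOT EXISTS blacklist ("
            "user_id INTEGER PRIMARY KEY, "
            "created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP)"
        )
        await db.commit()

Calling this once before the bot starts handling commands (for example from setup_hook or just before bot.run) makes the is_blacklisted check work instead of raising OperationalError.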
| Error (task exception was never retrieved) when running discord bot commands | I am making a discord bot using python, and I have run into an unexplainable error which I am unable to fix. I thought I fixed it by deleting the checks but I'm completely stumped by the massive block of errors I'm getting.
If anyone could please decode even some of this, I would be greatly appreciative.
Task exception was never retrieved
future: <Task finished name='CommandTree-invoker' coro=<CommandTree._from_interaction..wrapper() done, defined at C:\Users\mitsuk\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\discord\app_commands\tree.py:1089> exception=OperationalError('no such table: blacklist')>
Traceback (most recent call last):
File "C:\Users\mitsuk\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\discord\app_commands\tree.py", line 1091, in wrapper
await self._call(interaction)
File "C:\Users\mitsuk\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\discord\app_commands\tree.py", line 1242, in _call
await command._invoke_with_namespace(interaction, namespace)
File "C:\Users\mitsuk\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\discord\ext\commands\hybrid.py", line 436, in _invoke_with_namespace
await command.prepare(ctx)
File "C:\Users\mitsuk\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\discord\ext\commands\core.py", line 919, in prepare
if not await self.can_run(ctx):
File "C:\Users\mitsuk\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\discord\ext\commands\hybrid.py", line 524, in can_run
return await self.app_command._check_can_run(ctx.interaction)
File "C:\Users\mitsuk\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\discord\ext\commands\hybrid.py", line 418, in _check_can_run
if self.wrapped.checks and not await async_all(f(ctx) for f in self.wrapped.checks):
File "C:\Users\mitsuk\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\discord\utils.py", line 674, in async_all
elem = await elem
File "C:\Users\mitsuk\Documents\rirakkumabot\main\helpers\checks.py", line 31, in predicate
if await db_manager.is_blacklisted(context.author.id):
File "C:\Users\mitsuk\Documents\rirakkumabot\main\helpers\db_manager.py", line 8, in is_blacklisted
async with db.execute("SELECT * FROM blacklist WHERE user_id=?", (user_id,)) as cursor:
File "C:\Users\mitsuk\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\aiosqlite\context.py", line 41, in aenter
self._obj = await self._coro
File "C:\Users\mitsuk\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\aiosqlite\core.py", line 184, in execute
cursor = await self._execute(self._conn.execute, sql, parameters)
File "C:\Users\mitsuk\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\aiosqlite\core.py", line 129, in _execute
return await future
File "C:\Users\mitsuk\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\aiosqlite\core.py", line 102, in run
result = function()
sqlite3.OperationalError: no such table: blacklist
| [
"In the file C:\\Users\\mitsuk\\Documents\\rirakkumabot\\main\\helpers\\db_manager.py, line 8\nasync with db.execute(\"SELECT * FROM blacklist WHERE user_id=?\", (user_id,)) as cursor:\n\nError message\nsqlite3.OperationalError: no such table: blacklist\n\nThis error means that there's no table named blacklist in the database; you may want to create it before accessing it.\n"
] | [
0
] | [] | [] | [
"discord.py",
"python"
] | stackoverflow_0074675786_discord.py_python.txt |
Q:
How to improve performance - Merge two dataframes by closest geodetic distance
I have two dataframes, one radar which represents data on an equispaced grid with columns for longitude, latitude and height value, and one ice that has some information related to satellite observations, including the latitude and longitude of the observation. I want to merge the two so I can get ice with the 'height' column from radar, based on the geodetic distance point from each ice row to the closest radar point.
I'm currently doing it like this:
from geopy.distance import geodesic
import pandas as pd
def get_distance(out):
global radar
dists = radar['latlon'].apply(lambda x: geodesic(out['latlon'], x).km)
out['dist to radar']=min(dists)
out['rate_yr_radar']=radar.loc[dists.idxmin()]['rate_yr_radar']
return out
ICEvsRadar=ice.apply(get_distance, axis=1)
But it's very slow, I have around 200 points in my ice dataframe and around 50000 on the radar one. Is a slow performance to be expected based on the computational cost of calculating each distance, or could I improve something in how I apply the function?
edit: uploaded the example data on https://wetransfer.com/downloads/284036652e682a3e665994d360a3068920221203230651/5842f2
The code takes around 25 minutes to run, ice has lon, lat and latlon fields and is 180 rows long, and radar has 50000 rows with lon, lat, latlon and rate_yr_radar fields
A:
below code takes less than a second on my machine. Probably not working around equator/greenwich
import pandas as pd
import numpy as np
from scipy.spatial import KDTree
#reading data
radar = pd.read_csv("radar.csv")
ice = pd.read_csv("ice.csv")
#extrating points data
pts = np.array(radar.loc[:, ["lon", "lat"]])
#building tree
Tree = KDTree(pts)
#quering the nearest neighbour
distance, index = Tree.query(ice.loc[:, ["lon", "lat"]])
#getting relevant data from ice
reduced_radar = radar.loc[index, ["rate_yr_radar"]]
reduced_radar = reduced_radar.reset_index().rename({"index": "index_from_radar"}, axis=1)
#joining data
ice = ice.join(reduced_radar)
alternatively one could look at https://geopandas.org/en/stable/docs/reference/api/geopandas.GeoDataFrame.sjoin_nearest.html
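If the wrap-around caveat mentioned above matters for your data (points straddling the 180° meridian), a common workaround is to build the tree on 3-D unit-sphere coordinates instead of raw lon/lat; the query then compares chord distances, but the nearest neighbour it returns is the same point. A sketch, assuming the column names match the CSVs above:
import numpy as np
import pandas as pd
from scipy.spatial import KDTree

def to_xyz(lon_deg, lat_deg):
    # Degrees -> points on the unit sphere, so longitude wrap-around disappears
    lon = np.radians(np.asarray(lon_deg))
    lat = np.radians(np.asarray(lat_deg))
    return np.column_stack((np.cos(lat) * np.cos(lon),
                            np.cos(lat) * np.sin(lon),
                            np.sin(lat)))

radar = pd.read_csv("radar.csv")
ice = pd.read_csv("ice.csv")

tree = KDTree(to_xyz(radar["lon"], radar["lat"]))
_, index = tree.query(to_xyz(ice["lon"], ice["lat"]))
ice["rate_yr_radar"] = radar.loc[index, "rate_yr_radar"].to_numpy()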
| How to improve performance - Merge two dataframes by closest geodetic distance | I have two dataframes, one radar which represents data on an equispaced grid with columns for longitude, latitude and height value, and one ice that has some information related to satellite observations, including the latitude and longitude of the observation. I want to merge the two so I can get ice with the 'height' column from radar, based on the geodetic distance point from each ice row to the closest radar point.
I'm currently doing it like this:
from geopy.distance import geodesic
import pandas as pd
def get_distance(out):
global radar
dists = radar['latlon'].apply(lambda x: geodesic(out['latlon'], x).km)
out['dist to radar']=min(dists)
out['rate_yr_radar']=radar.loc[dists.idxmin()]['rate_yr_radar']
return out
ICEvsRadar=ice.apply(get_distance, axis=1)
But it's very slow, I have around 200 points in my ice dataframe and around 50000 on the radar one. Is a slow performance to be expected based on the computational cost of calculating each distance, or could I improve something in how I apply the function?
edit: uploaded the example data on https://wetransfer.com/downloads/284036652e682a3e665994d360a3068920221203230651/5842f2
The code takes around 25 minutes to run, ice has lon, lat and latlon fields and is 180 rows long, and radar has 50000 rows with lon, lat, latlon and rate_yr_radar fields
| [
"below code takes less than a second on my machine. Probably not working around equator/greenwich\nimport pandas as pd\nimport numpy as np\nfrom scipy.spatial import KDTree\n\n#reading data\nradar = pd.read_csv(\"radar.csv\")\nice = pd.read_csv(\"ice.csv\")\n\n#extrating points data\npts = np.array(radar.loc[:, [\"lon\", \"lat\"]])\n\n#building tree\nTree = KDTree(pts)\n\n#quering the nearest neighbour\ndistance, index = Tree.query(ice.loc[:, [\"lon\", \"lat\"]])\n\n#getting relevant data from ice\nreduced_radar = radar.loc[index, [\"rate_yr_radar\"]]\nreduced_radar = reduced_radar.reset_index().rename({\"index\": \"index_from_radar\"}, axis=1)\n\n#joining data\nice = ice.join(reduced_radar)\n\nalternatively one could look at https://geopandas.org/en/stable/docs/reference/api/geopandas.GeoDataFrame.sjoin_nearest.html\n"
] | [
1
] | [] | [] | [
"dataframe",
"distance",
"merge",
"pandas",
"python"
] | stackoverflow_0074669645_dataframe_distance_merge_pandas_python.txt |
Q:
Traefik IngressRoute CRD not Registering Any Routes
I'm configuring Traefik Proxy to run on a GKE cluster to handle proxying to various microservices. I'm doing everything through their CRDs and deployed Traefik to the cluster using a custom deployment. The Traefik dashboard is accessible and working fine, however when I try to setup an IngressRoute for the service itself, it is not accessible and it does not appear in the dashboard. I've tried setting it up with a regular k8s Ingress object and when doing that, it did appear in the dashboard, however I ran into some issues with middleware, and for ease-of-use I'd prefer to go the CRD route. Also, the deployment and service for the microservice seem to be deploying fine, they both appear in the GKE dashboard and are running normally. No ingress is created, however I'm unsure of if a custom CRD IngressRoute is supposed to create one or not.
Some information about the configuration:
I'm using Kustomize to handle overlays and general data
I have a setting through kustomize to apply the namespace users to everything
Below are the config files I'm using, and the CRDs and RBAC are defined by calling
kubectl apply -f https://raw.githubusercontent.com/traefik/traefik/v2.9/docs/content/reference/dynamic-configuration/kubernetes-crd-definition-v1.yml
kubectl apply -f https://raw.githubusercontent.com/traefik/traefik/v2.9/docs/content/reference/dynamic-configuration/kubernetes-crd-rbac.yml
deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
name: users-service
spec:
replicas: 1
strategy:
rollingUpdate:
maxSurge: 1
maxUnavailable: 0
type: RollingUpdate
template:
metadata:
labels:
app: users-service
spec:
containers:
- name: users-service
image: ${IMAGE}
imagePullPolicy: IfNotPresent
ports:
- name: web
containerPort: ${HTTP_PORT}
readinessProbe:
httpGet:
path: /ready
port: web
initialDelaySeconds: 10
periodSeconds: 2
envFrom:
- secretRef:
name: users-service-env-secrets
service.yml
apiVersion: v1
kind: Service
metadata:
name: users-service
spec:
ports:
- name: web
protocol: TCP
port: 80
targetPort: web
selector:
app: users-service
ingress.yml
apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
metadata:
name: users-stripprefix
spec:
stripPrefix:
prefixes:
- /userssrv
---
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
name: users-service-ingress
spec:
entryPoints:
- service-port
routes:
- kind: Rule
match: PathPrefix(`/userssrv`)
services:
- name: users-service
namespace: users
port: service-port
middlewares:
- name: users-stripprefix
If any more information is needed, just lmk. Thanks!
A:
A default Traefik installation on Kubernetes creates two entrypoints:
web for http access, and
websecure for https access
But you have in your IngressRoute configuration:
entryPoints:
- service-port
Unless you have explicitly configured Traefik with an entrypoint named "service-port", this is probably your problem. You want to remove the entryPoints section, or specify something like:
entryPoints:
- web
If you omit the entryPoints configuration, the service will be available on all entrypoints. If you include explicit entrypoints, then the service will only be available on those specific entrypoints (e.g. with the above configuration, the service would be available via http:// and not via https://).
Not directly related to your problem, but if you're using Kustomize, consider:
Drop the app: users-service label from the deployment, the service selector, etc, and instead set that in your kustomization.yaml using the commonLabels directive.
Drop the explicit namespace from the service specification in your IngressRoute and instead use kustomize's namespace transformer to set it (this lets you control the namespace exclusively from your kustomization.yaml).
I've put together a deployable example with all the changes mentioned in this answer here.
| Traefik IngressRoute CRD not Registering Any Routes | I'm configuring Traefik Proxy to run on a GKE cluster to handle proxying to various microservices. I'm doing everything through their CRDs and deployed Traefik to the cluster using a custom deployment. The Traefik dashboard is accessible and working fine, however when I try to setup an IngressRoute for the service itself, it is not accessible and it does not appear in the dashboard. I've tried setting it up with a regular k8s Ingress object and when doing that, it did appear in the dashboard, however I ran into some issues with middleware, and for ease-of-use I'd prefer to go the CRD route. Also, the deployment and service for the microservice seem to be deploying fine, they both appear in the GKE dashboard and are running normally. No ingress is created, however I'm unsure of if a custom CRD IngressRoute is supposed to create one or not.
Some information about the configuration:
I'm using Kustomize to handle overlays and general data
I have a setting through kustomize to apply the namespace users to everything
Below are the config files I'm using, and the CRDs and RBAC are defined by calling
kubectl apply -f https://raw.githubusercontent.com/traefik/traefik/v2.9/docs/content/reference/dynamic-configuration/kubernetes-crd-definition-v1.yml
kubectl apply -f https://raw.githubusercontent.com/traefik/traefik/v2.9/docs/content/reference/dynamic-configuration/kubernetes-crd-rbac.yml
deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
name: users-service
spec:
replicas: 1
strategy:
rollingUpdate:
maxSurge: 1
maxUnavailable: 0
type: RollingUpdate
template:
metadata:
labels:
app: users-service
spec:
containers:
- name: users-service
image: ${IMAGE}
imagePullPolicy: IfNotPresent
ports:
- name: web
containerPort: ${HTTP_PORT}
readinessProbe:
httpGet:
path: /ready
port: web
initialDelaySeconds: 10
periodSeconds: 2
envFrom:
- secretRef:
name: users-service-env-secrets
service.yml
apiVersion: v1
kind: Service
metadata:
name: users-service
spec:
ports:
- name: web
protocol: TCP
port: 80
targetPort: web
selector:
app: users-service
ingress.yml
apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
metadata:
name: users-stripprefix
spec:
stripPrefix:
prefixes:
- /userssrv
---
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
name: users-service-ingress
spec:
entryPoints:
- service-port
routes:
- kind: Rule
match: PathPrefix(`/userssrv`)
services:
- name: users-service
namespace: users
port: service-port
middlewares:
- name: users-stripprefix
If any more information is needed, just lmk. Thanks!
| [
"A default Traefik installation on Kubernetes creates two entrypoints:\n\nweb for http access, and\nwebsecure for https access\n\nBut you have in your IngressRoute configuration:\nentryPoints:\n - service-port\n\nUnless you have explicitly configured Traefik with an entrypoint named \"service-port\", this is probably your problem. You want to remove the entryPoints section, or specify something like:\nentryPoints:\n - web\n\nIf you omit the entryPoints configuration, the service will be available on all entrypoints. If you include explicit entrypoints, then the service will only be available on those specific entrypoints (e.g. with the above configuration, the service would be available via http:// and not via https://).\n\nNot directly related to your problem, but if you're using Kustomize, consider:\n\nDrop the app: users-service label from the deployment, the service selector, etc, and instead set that in your kustomization.yaml using the commonLabels directive.\n\nDrop the explicit namespace from the service specification in your IngressRoute and instead use kustomize's namespace transformer to set it (this lets you control the namespace exclusively from your kustomization.yaml).\n\n\nI've put together a deployable example with all the changes mentioned in this answer here.\n"
] | [
1
] | [] | [] | [
"kubernetes",
"traefik",
"traefik_ingress"
] | stackoverflow_0074672718_kubernetes_traefik_traefik_ingress.txt |
Q:
How can I check if 24 hours have passed between two dates in kotlin?
I need to check if 24 hours have passed between two dates. If 24 hours have passed, I will update the data in my room database, if not, I will not. How can I do that. I thought about keeping a date with shared preferences and comparing that date with the current date, but I don't know how to do that.
A:
You need to check that the different between the date stored in shared pref as an Long and the current date are bigger than or equal the value of 1 day in millis
var isDayPassed = (System.currentTimeMillis() - date) >= TimeUnit.DAYS.toMillis(1)
Note: Make sure to import TimeUnit using import java.util.concurrent.TimeUnit
A:
Here is an example of how you could check if 24 hours have passed between two dates in Android using the java.util.Date class:
// Get the current date and time
Date currentDate = new Date();
// Get the date and time from the shared preferences
// Replace "dateKey" with the key for the date in the shared preferences
SharedPreferences preferences = PreferenceManager.getDefaultSharedPreferences(context);
long savedDateMillis = preferences.getLong("dateKey", 0);
Date savedDate = new Date(savedDateMillis);
// Calculate the difference between the current date and the saved date in milliseconds
long timeDifference = currentDate.getTime() - savedDate.getTime();
// Convert the time difference from milliseconds to hours
long timeDifferenceHours = TimeUnit.MILLISECONDS.toHours(timeDifference);
// Check if 24 hours have passed
if (timeDifferenceHours >= 24) {
// 24 hours or more have passed, so update the data in the room database
// ...
// Save the current date and time in the shared preferences
SharedPreferences.Editor editor = preferences.edit();
editor.putLong("dateKey", currentDate.getTime());
editor.apply();
} else {
// Less than 24 hours have passed, so do not update the data in the room database
// ...
}
In this code, we use the java.util.Date class to get the current date and time and the date and time from the shared preferences. We then calculate the difference between the two dates in milliseconds and convert it to hours. Finally, we check if 24 hours or more have passed, and if so, we update the data in the room database and save the current date and time in the shared preferences.
I hope this helps! Let me know if you have any questions.
| How can I check if 24 hours have passed between two dates in kotlin? | I need to check if 24 hours have passed between two dates. If 24 hours have passed, I will update the data in my room database, if not, I will not. How can I do that. I thought about keeping a date with shared preferences and comparing that date with the current date, but I don't know how to do that.
| [
"You need to check that the different between the date stored in shared pref as an Long and the current date are bigger than or equal the value of 1 day in millis\nvar isDayPassed = (System.currentTimeMillis() - date) >= TimeUnit.DAYS.toMillis(1)\n\n\nNote: Make sure to import TimeUnit using import java.util.concurrent.TimeUnit \n",
"Here is an example of how you could check if 24 hours have passed between two dates in Android using the java.util.Date class:\n// Get the current date and time\nDate currentDate = new Date();\n// Get the date and time from the shared preferences\n// Replace \"dateKey\" with the key for the date in the shared preferences\nSharedPreferences preferences = PreferenceManager.getDefaultSharedPreferences(context);\nlong savedDateMillis = preferences.getLong(\"dateKey\", 0);\nDate savedDate = new Date(savedDateMillis);\n\n// Calculate the difference between the current date and the saved date in milliseconds\nlong timeDifference = currentDate.getTime() - savedDate.getTime();\n\n// Convert the time difference from milliseconds to hours\nlong timeDifferenceHours = TimeUnit.MILLISECONDS.toHours(timeDifference);\n\n// Check if 24 hours have passed\nif (timeDifferenceHours >= 24) {\n // 24 hours or more have passed, so update the data in the room database\n // ...\n\n // Save the current date and time in the shared preferences\n SharedPreferences.Editor editor = preferences.edit();\n editor.putLong(\"dateKey\", currentDate.getTime());\n editor.apply();\n} else {\n // Less than 24 hours have passed, so do not update the data in the room database\n // ...\n}\n\nIn this code, we use the java.util.Date class to get the current date and time and the date and time from the shared preferences. We then calculate the difference between the two dates in milliseconds and convert it to hours. Finally, we check if 24 hours or more have passed, and if so, we update the data in the room database and save the current date and time in the shared preferences.\nI hope this helps! Let me know if you have any questions.\n"
] | [
1,
1
] | [] | [] | [
"date",
"date_difference",
"datetime",
"kotlin"
] | stackoverflow_0074675664_date_date_difference_datetime_kotlin.txt |
Q:
Why is imported typescript function not available in Chrome Debugger
I'm trying to debug a method that I have imported into an Angular component. However, some scope peculiarities around TypeScript mean that I cannot access the imported method via the debugger.
The method is the formatDuration method from date-fns. I want to be able to debug the method directly in the debugger; however, for some reason the method can't be accessed there and is always undefined.
import { Component } from '@angular/core'
import { formatDuration } from 'date-fns' // <= method imported
...
export class EntryComponent {
duration:integer = 1000
constructor() { }
get duration():string{
let duration_str = formatDuration({ seconds: this.duration },
{format: ['hours', 'minutes']}
)
// I want to be able to jump in here and use the `formatDuration` method:
debugger // <= debug statement
return duration_str
}
}
The debugger won't recognise the method when I try to call it
When I run the code above and attempt to call the formatDuration method in the console I get an error:
The method IS available to the code that's carrying it
I can't call the method using the debugger; however, if I remove the debugger statement it is called successfully. For some reason, it's out of scope in the debugger.
Copying the method to a local variable makes it available...
get duration():string{
let duration_str = formatDuration({ seconds: this.duration },
{format: ['hours', 'minutes']}
)
// make a copy -------------
let myVersion:any = formatDuration;
// -------------------------
debugger // <= debug statement
return duration_str
}
Running myVersion in the console now returns the function as expected:
Stackblitz demonstration
Here's a stackblitz app that shows the problem. Open your debugger before loading the page and then follow the instructions just before the debug line. The source code for the Stackblitz page is here.
What's happening with scopes such that I can't access the imported method directly?
A:
You should check the compiled JS file, because this is what the console will target. This generally depends on the target in your tsconfig.json and/or the packaging system as well. Because you are using angular, the packaging is done with webpack. You can find your formatDuration function somewhere here:
_date-fns__WEBPACK_IMPORTED_MODULE_1__.formatDuration
The 1 can also be 2 or 0 or 100, you will have to check the closure in the Scope section of the debugger. It's usually in the closure which has as name the typescript file you are debugging. For example:
Closure (./src/app/components/entry.component.ts)
For example, here you can see the imports from a certain service in my application:
Obviously, if you use --prod, this will all be minified, and will make things a lot harder to trace :)
A:
A bit easier method of using an imported module, to add on Poul Kruijt's answer:
Create a breakpoint in the function that uses that module,
in the devtools in the breakpoints panel, go to Scope, look for your imported module, looks something like that:
right click the module you want, and select "Store function as global variable".
now you can call it normally, like "temp1.whatever()"
| Why is imported typescript function not available in Chrome Debugger | I'm trying to debug a method that I have imported into an Angular component. However, some scope peculiarities around TypeScript mean that I cannot access the imported method via the debugger.
The method is the formatDuration method from date-fns. I want to be able to debug the method directly in the debugger; however, for some reason the method can't be accessed there and is always undefined.
import { Component } from '@angular/core'
import { formatDuration } from 'date-fns' // <= method imported
...
export class EntryComponent {
duration:integer = 1000
constructor() { }
get duration():string{
let duration_str = formatDuration({ seconds: this.duration },
{format: ['hours', 'minutes']}
)
// I want to be able to jump in here and use the `formatDuration` method:
debugger // <= debug statement
return duration_str
}
}
The debugger won't recognise the method when I try to call it
When I run the code above and attempt to call the formatDuration method in the console I get an error:
The method IS available to the code that's carrying it
I can't call the method using the debugger however if I remove the debugger statement it is being called successfully. For some reason, it's out of scope of the debugger though ♂️
Copying the method to a local variable makes it available...
get duration():string{
let duration_str = formatDuration({ seconds: this.duration },
{format: ['hours', 'minutes']}
)
// make a copy -------------
let myVersion:any = formatDuration;
// -------------------------
debugger // <= debug statement
return duration_str
}
Running myVersion in the console now returns the function as expected:
Stackblitz demonstration
Here's a stackblitz app that shows the problem. Open your debugger before loading the page and then follow the instructions just before the debug line. The source code for the Stackblitz page is here.
What's happening with scopes such that I can't access the imported method directly?
| [
"You should check the compiled JS file, because this is what the console will target. This generally depends on the target in your tsconfig.json and/or the packaging system as well. Because you are using angular, the packaging is done with webpack. You can find your formatDuration function somewhere here:\n_date-fns__WEBPACK_IMPORTED_MODULE_1__.formatDuration\n\nThe 1 can also be 2 or 0 or 100, you will have to check the closure in the Scope section of the debugger. It's usually in the closure which has as name the typescript file you are debugging. For example:\nClosure (./src/app/components/entry.component.ts)\n\nFor example, here you can see the imports from a certain service in my application:\n\nObviously, if you use --prod, this will all be minified, and will make things a lot harder to trace :)\n",
"A bit easier method of using an imported module, to add on Poul Kruijt's answer:\nCreate a breakpoint in the function that uses that module,\nin the devtools in the breakpoints panel, go to Scope, look for your imported module, looks something like that:\n\nright click the module you want, and select \"Store function as global variable\".\nnow you can call it normally, like \"temp1.whatever()\"\n"
] | [
2,
0
] | [] | [] | [
"angular",
"javascript_debugger",
"typescript"
] | stackoverflow_0063595577_angular_javascript_debugger_typescript.txt |
Q:
Error during initialization of subgraph using Graph protocol
I have a question here regarding The Graph indexing protocol. I am trying to initialize a subgraph but keep getting the error below. My npm version is 9.1.2, yarn version is 3.2.3, node version is 18.12.1, and graph version is 0.36.1.
√ Fetching ABI from Etherscan
√ Contract Name · NftMarketplace
√ Index contract events as entities (Y/n) · true
———
Generate subgraph
Write subgraph to directory
√ Create subgraph scaffold
√ Initialize networks config
√ Initialize subgraph repository
× Failed to install dependencies: Command failed: yarn
C:\Users\User\AppData\Roaming\nvm\v14.17.0\node_modules\@graphprotocol\graph-cli\node_modules\gluegun\build\index.js:13
throw up;
^
Error: Command failed: yarn
at ChildProcess.exithandler (child_process.js:319:12)
at ChildProcess.emit (events.js:376:20)
at maybeClose (internal/child_process.js:1055:16)
at Process.ChildProcess._handle.onexit (internal/child_process.js:288:5) {
killed: false,
code: 1,
signal: null,
cmd: 'yarn',
stderr: ''
}
I have tried downgrading the Node version to v12.22.12 but am still facing the same issue.
A:
It is because of your Node.js version. Try using a Node.js version below 18.
You can manage it using nvm (Node Version Manager).
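For example, switching versions with nvm could look like this (16.13.0 is just an illustrative LTS release, not a specific requirement):
nvm install 16.13.0
nvm use 16.13.0
node -v
Then re-run the failing graph init command.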
I had the same error. It worked for me.
Hope this helps.
| Error during initialization of subgraph using Graph protocol | I have a question here regarding The Graph indexing protocol. I am trying to initialize a subgraph but keep getting the error below. My npm version is 9.1.2, yarn version is 3.2.3, node version is 18.12.1, and graph version is 0.36.1.
√ Fetching ABI from Etherscan
√ Contract Name · NftMarketplace
√ Index contract events as entities (Y/n) · true
———
Generate subgraph
Write subgraph to directory
√ Create subgraph scaffold
√ Initialize networks config
√ Initialize subgraph repository
× Failed to install dependencies: Command failed: yarn
C:\Users\User\AppData\Roaming\nvm\v14.17.0\node_modules\@graphprotocol\graph-cli\node_modules\gluegun\build\index.js:13
throw up;
^
Error: Command failed: yarn
at ChildProcess.exithandler (child_process.js:319:12)
at ChildProcess.emit (events.js:376:20)
at maybeClose (internal/child_process.js:1055:16)
at Process.ChildProcess._handle.onexit (internal/child_process.js:288:5) {
killed: false,
code: 1,
signal: null,
cmd: 'yarn',
stderr: ''
}
I have tried to downgrade the node version to v12.22.12 but still facing the same issue.
| [
"It is because of your NodeJs version. Try to use a NodeJs version below 18.\nYou can manage it using nvm (Node Version Manager).\nI had the same error. It worked for me.\nHope this helps.\n"
] | [
0
] | [] | [] | [
"graph",
"npm",
"solidity",
"yarnpkg"
] | stackoverflow_0074478798_graph_npm_solidity_yarnpkg.txt |
Q:
Why are both of these MSSQL indexes useful?
I have a MSSQL database table Events. I am worried that performance could be improved.
EventId LocationId Start End Quantity Price Currency
1 4 2022-08-31 22:00:00.0000000 +02:00 2022-08-31 23:00:00.0000000 +02:00 7.50000 2.0 EUR
2 2 2022-04-04 19:00:00.0000000 +01:00 2022-04-04 20:00:00.0000000 +01:00 1.50000 7.5 EUR
3 2 2022-04-04 19:00:00.0000000 +01:00 2022-04-04 20:00:00.0000000 +01:00 4.00000 8.2 EUR
I already have the following index:
CREATE NONCLUSTERED INDEX [IDX__Events__Location_Start_End] on [Events]
(
[LocationId] asc,
[Start] asc,
[End] asc
)
But Azure suggests that I create this index (medium impact):
CREATE NONCLUSTERED INDEX [IDX__Events__Location_End] ON [dbo].[Events] ([LocationId], [End]) INCLUDE ([Currency], [Price], [Quantity], [Start]) WITH (ONLINE = ON)
Hint: I do a lot of queries where I select Events greater than a start time and less than an end time.
Why is this extra index useful? Should I change my first index instead?
EDIT:
I run this code (EF Core) very often:
var relevantEvents = await _events.Where($@"
[{nameof(Events.LocationId)}] = @locationId
and [{nameof(Events.End)}] > @start
and [{nameof(Events.Start)}] < @end
", args);
Besides that, I upsert to the table often as well.
A:
If you always do queries of the form shown at the end of your question with an equality predicate on LocationId and an inequality predicate on both End and Start then both LocationId, End and LocationId, Start would be viable indexing choices.
Note there is no benefit in adding the third column as a key column, because the seek can only use a range on one of them; the other one should be added as an included column instead.
My suspicion is that for typical scenarios LocationId, Start will generally involve reading more rows in the range seek than LocationId, End would (as the table accumulates years' worth of data, Start < @end will still need to read all the old rows from years ago).
The reason your existing index might not be being used, and why Azure feels moved to suggest an additional one, is the INCLUDE ([Currency], [Price], [Quantity]) in the suggested one. If you add these included columns to your existing one you may well see the recommendation go away, but you should consider which of LocationId, End and LocationId, Start will typically be able to narrow down the rows better (see "Numbers of rows read" in the execution plan).
CREATE TABLE dbo.Events
(
EventId INT IDENTITY PRIMARY KEY,
LocationId INT,
Start DATETIME2,
[End] DATETIME2,
Quantity DECIMAL(10,6),
Price DECIMAL(10,2),
Currency CHAR(3),
INDEX IDX__Events__Location_Start_End(LocationId, Start, [End]),
INDEX IDX__Events__Location_End(LocationId, [End]) INCLUDE (Start)
)
INSERT INTO dbo.Events
(LocationId, Start,[End], Quantity, Price,Currency)
SELECT LocationId = 1,
Start = DATEADD(SECOND, -Num, GETDATE()),
[End] = DATEADD(SECOND, 60-Num, GETDATE()),
Qty = 7.5,
Price = 2,
Currency = 'EUR'
FROM
(
SELECT TOP 1000000 Num = ROW_NUMBER() OVER (ORDER BY @@SPID)
FROM sys.all_columns c1, sys.all_columns c2
) Nums
DECLARE @Start DATE = GETDATE(), @End DATE = DATEADD(DAY, 1, GETDATE())
SELECT COUNT(*)
FROM dbo.Events WITH (INDEX = IDX__Events__Location_Start_End)
WHERE LocationId = 1 AND [End] > @Start AND Start < @End
OPTION (RECOMPILE)
SELECT COUNT(*)
FROM dbo.Events WITH (INDEX = IDX__Events__Location_End)
WHERE LocationId = 1 AND [End] > @Start AND Start < @End
OPTION (RECOMPILE)
A:
It appears that the index suggested by Azure (IDX__Events__Location_End) would be useful for improving the performance of queries that filter events by location and end time. This index would include the location and end time columns, as well as the currency, price, quantity, and start time columns. This would allow the database engine to quickly look up and return events that match the specified criteria without having to scan the entire table.
As for your existing index (IDX__Events__Location_Start_End), it would be useful for improving the performance of queries that filter events by location, start time, and end time. However, since your code only filters events by location and end time, this index may not be as useful as the one suggested by Azure.
In terms of which index to use, it ultimately depends on the specific queries and workloads that your application uses. If your queries and workloads primarily involve filtering events by location and end time, then it would make sense to use the index suggested by Azure (IDX__Events__Location_End). If, on the other hand, your queries and workloads primarily involve filtering events by location, start time, and end time, then it would make sense to use your existing index (IDX__Events__Location_Start_End).
It is also worth noting that having multiple indexes can improve query performance, but it can also impact the performance of insert, update, and delete operations. Therefore, it is important to carefully consider the trade-offs and decide which indexes are most appropriate for your specific workloads and performance requirements.
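For completeness, if you decide to extend the existing index rather than add a second one, recreating it with the suggested included columns might look like this (a sketch only; it keys on LocationId, Start, following the first answer's point that only one of the date columns benefits from being a key column, and the names are taken from the question):
CREATE NONCLUSTERED INDEX [IDX__Events__Location_Start_End] ON [dbo].[Events]
(
    [LocationId] ASC,
    [Start] ASC
)
INCLUDE ([End], [Quantity], [Price], [Currency])
WITH (DROP_EXISTING = ON, ONLINE = ON);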
| Why are both of these MSSQL indexes useful? | I have a MSSQL database table Events. I am worried that performance could be improved.
EventId LocationId Start End Quantity Price Currency
1 4 2022-08-31 22:00:00.0000000 +02:00 2022-08-31 23:00:00.0000000 +02:00 7.50000 2.0 EUR
2 2 2022-04-04 19:00:00.0000000 +01:00 2022-04-04 20:00:00.0000000 +01:00 1.50000 7.5 EUR
3 2 2022-04-04 19:00:00.0000000 +01:00 2022-04-04 20:00:00.0000000 +01:00 4.00000 8.2 EUR
I already have the following index:
CREATE NONCLUSTERED INDEX [IDX__Events__Location_Start_End] on [Events]
(
[LocationId] asc,
[Start] asc,
[End] asc
)
But Azure suggests that I create this index (medium impact):
CREATE NONCLUSTERED INDEX [IDX__Events__Location_End] ON [dbo].[Events] ([LocationId], [End]) INCLUDE ([Currency], [Price], [Quantity], [Start]) WITH (ONLINE = ON)
Hint: I do a lot of queries where I select Events greater than a start time and less than an end time.
Why is this extra index useful? Should I change my first index instead?
EDIT:
I run this code (EF Core) very often:
var relevantEvents = await _events.Where($@"
[{nameof(Events.LocationId)}] = @locationId
and [{nameof(Events.End)}] > @start
and [{nameof(Events.Start)}] < @end
", args);
Besides that, I upsert to the table often as well.
| [
"If you always do queries of the form shown at the end of your question with an equality predicate on LocationId and an inequality predicate on both End and Start then both LocationId, End and LocationId, Start would be viable indexing choices.\nNote there is no benefit of adding the third column in as a key column because it will only be able to do a range seek for one or the other of them but the other one should be added as an included column.\nMy suspicion is that for typical scenarios LocationId, Start will generally involve reading more rows in the range seek than LocationId, End would (as the table accumulates years worth of data Start < @end will still need to read all the old rows from years ago).\nThe reason your existing index might not be being used and it feels moved to suggest an additional one is due to the INCLUDE ([Currency], [Price], [Quantity]) in the suggested one. If you add these included columns to your existing one you may well see the recommendation go away but you should consider which of LocationId, End and LocationId, Start will typically be able to narrow down the rows better (see \"Numbers of rows read\" in the execution plan).\n\n\nCREATE TABLE dbo.Events\n(\nEventId INT IDENTITY PRIMARY KEY, \nLocationId INT, \nStart DATETIME2,\n[End] DATETIME2,\nQuantity DECIMAL(10,6),\nPrice DECIMAL(10,2),\nCurrency CHAR(3),\nINDEX IDX__Events__Location_Start_End(LocationId, Start, [End]),\nINDEX IDX__Events__Location_End(LocationId, [End]) INCLUDE (Start)\n)\n\n\nINSERT INTO dbo.Events\n(LocationId, Start,[End], Quantity, Price,Currency)\nSELECT LocationId = 1,\n Start = DATEADD(SECOND, -Num, GETDATE()),\n [End] = DATEADD(SECOND, 60-Num, GETDATE()),\n Qty = 7.5,\n Price = 2,\n Currency = 'EUR'\nFROM \n(\nSELECT TOP 1000000 Num = ROW_NUMBER() OVER (ORDER BY @@SPID)\nFROM sys.all_columns c1, sys.all_columns c2\n) Nums\n\n\nDECLARE @Start DATE = GETDATE(), @End DATE = DATEADD(DAY, 1, GETDATE())\n\nSELECT COUNT(*)\nFROM dbo.Events WITH (INDEX = IDX__Events__Location_Start_End)\nWHERE LocationId = 1 AND [End] > @Start AND Start < @End\nOPTION (RECOMPILE)\n\nSELECT COUNT(*)\nFROM dbo.Events WITH (INDEX = IDX__Events__Location_End)\nWHERE LocationId = 1 AND [End] > @Start AND Start < @End\nOPTION (RECOMPILE)\n\n",
"It appears that the index suggested by Azure (IDX__Events__Location_End) would be useful for improving the performance of queries that filter events by location and end time. This index would include the location and end time columns, as well as the currency, price, quantity, and start time columns. This would allow the database engine to quickly look up and return events that match the specified criteria without having to scan the entire table.\nAs for your existing index (IDX__Events__Location_Start_End), it would be useful for improving the performance of queries that filter events by location, start time, and end time. However, since your code only filters events by location and end time, this index may not be as useful as the one suggested by Azure.\nIn terms of which index to use, it ultimately depends on the specific queries and workloads that your application uses. If your queries and workloads primarily involve filtering events by location and end time, then it would make sense to use the index suggested by Azure (IDX__Events__Location_End). If, on the other hand, your queries and workloads primarily involve filtering events by location, start time, and end time, then it would make sense to use your existing index (IDX__Events__Location_Start_End).\nIt is also worth noting that having multiple indexes can improve query performance, but it can also impact the performance of insert, update, and delete operations. Therefore, it is important to carefully consider the trade-offs and decide which indexes are most appropriate for your specific workloads and performance requirements.\n"
] | [
2,
1
] | [] | [] | [
"entity_framework",
"indexing",
"sql",
"sql_server"
] | stackoverflow_0074675467_entity_framework_indexing_sql_sql_server.txt |
Q:
Map one SQL table to two C# entities with one to many relation
I have one SQL table that I need to map to two entities in EF Core. The relation should look like one-to-many. I need to get a result as if it were a normal group by.
The expected result "like group by Model and Ver" should be like
The entities code :
public class ModelVersionControl
{
public string Model { get; set; }
public string Version { get; set; }
public IEnumerable<Item> Items { get; set; }
}
public class Item
{
public string ItemCode { get; set; }
public string UnitId { get; set; }
public string Description { get; set; }
}
The DbContext :
public class MyDbContext : DbContext
{
public MyDbContext(DbContextOptions options)
: base(options)
{
}
public DbSet<ModelVersionControl> ModelVersions { get; set; }
protected override void OnModelCreating(ModelBuilder modelBuilder)
{
// Normal dbset has column name configurations etc ....
modelBuilder.Entity<ModelVersionControl>()
.HasMany<Item>(mr => mr.Items)
.WithOne();
}
}
I can not get this to work :(
Any help? What am I missing?
Note: I don't want to use group by. I need to have the entity "ModelVersionControl" with a navigation property to "Items" using the EF model.
A:
Entities should be
public class ModelVersionControl
{
//primary key for this table
public long Id {get;set;}
public string Model { get; set; }
public string Version { get; set; }
// virtual property for getting related record and mapping
public virtual IEnumerable<Item> Items { get; set; }
}
public class Item
{
// primary key for this table
public long Id {get;set;}
public string ItemCode { get; set; }
public string UnitId { get; set; }
public string Description { get; set; }
// model version control id for reference
public long ModelVersionControlId {get;set;}
// virtual property for related record and mapping
public virtual ModelVersionControl ModelVersionControl {get;set;}
}
In the database context:
public class MyDbContext : DbContext
{
public MyDbContext(DbContextOptions options)
: base(options)
{
}
public DbSet<ModelVersionControl> ModelVersions { get; set; }
public DbSet<Item> Items {get;set;}
protected override void OnModelCreating(ModelBuilder modelBuilder)
{
//completed
modelBuilder.Entity<ModelVersionControl>()
.HasMany<Item>(mr => mr.Items)
.WithOne(x=> x.ModelVersionControl)
.HasForeignKey(x=>x.ModelVersionControlId);
}
}
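With that mapping in place, loading a model version together with its items could look roughly like this (a sketch; it assumes the context above is registered and the filter value is hypothetical):
using Microsoft.EntityFrameworkCore;

// somewhere with an injected MyDbContext `db`
var versions = await db.ModelVersions
    .Include(mv => mv.Items)              // loads the related Item rows as well
    .Where(mv => mv.Model == "SomeModel") // hypothetical filter value
    .ToListAsync();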
| Map one SQL table to two C# entities with one to many relation | I have one SQL table that I need to map to two entities in EF core. The relation should looks like one to many. I need to get a result as if it was normal group by.
The expected result "like group by Model and Ver" should be like
The entities code :
public class ModelVersionControl
{
public string Model { get; set; }
public string Version { get; set; }
public IEnumerable<Item> Items { get; set; }
}
public class Item
{
public string ItemCode { get; set; }
public string UnitId { get; set; }
public string Description { get; set; }
}
The DbContext :
public class MyDbContext : DbContext
{
public MyDbContext(DbContextOptions options)
: base(options)
{
}
public DbSet<ModelVersionControl> ModelVersions { get; set; }
protected override void OnModelCreating(ModelBuilder modelBuilder)
{
// Normal dbset has coulmn name configuartions etc ....
modelBuilder.Entity<ModelVersionControl>()
.HasMany<Item>(mr => mr.Items)
.WithOne();
}
}
I can not get this to work :(
Any help ? what I'm missing
Note : I don't want to use group by. I need to have entity "ModelVersionControl" with navigation property to "Items" using EF model
| [
"Entities should be\npublic class ModelVersionControl\n{\n //primary key for this table\n public long Id {get;set;}\n public string Model { get; set; }\n public string Version { get; set; }\n // virtual property for getting related record and mapping\n public virtual IEnumerable<Item> Items { get; set; }\n}\n\npublic class Item\n{\n // primary key for this table\n public long Id {get;set;}\n public string ItemCode { get; set; }\n public string UnitId { get; set; }\n public string Description { get; set; }\n // model version control id for reference\n public long ModelVersionControlId {get;set;}\n // virtual property for related record and mapping\n public virtual ModelVersionControl ModelVersionControl {get;set;}\n\n}\n\nIn database conext\npublic class MyDbContext : DbContext\n{\n public MyDbContext(DbContextOptions options)\n : base(options)\n {\n\n }\n public DbSet<ModelVersionControl> ModelVersions { get; set; }\n public DbSet<Item> Items {get;set;}\n \n protected override void OnModelCreating(ModelBuilder modelBuilder)\n {\n //completed\n modelBuilder.Entity<ModelVersionControl>()\n .HasMany<Item>(mr => mr.Items)\n .WithOne(x=> x.ModelVersionControl) \n .HasForeignKey(x=>x.ModelVersionControlId);\n }\n}\n\n"
] | [
0
] | [] | [] | [
"c#",
"entity_framework_core",
"sql"
] | stackoverflow_0074675086_c#_entity_framework_core_sql.txt |
Q:
Intersection point of n singly linked list
I was given a technical interview in which I was asked to find the intersection point of n linked lists.
I could come up with a way to find the intersection point of 2 linked lists but couldn't extend it.
Could someone help me reach the algorithm?
I tried to call the function that finds the intersection point of 2 linked lists for every pair, but that didn't work.
A:
To find the intersection point of n linked lists, you can use the following steps:
Create a map that maps each node in the linked lists to the number of linked lists it appears in.
Iterate through the linked lists and for each node, increment the count in the map for that node.
Iterate through the map and return the first node that appears in all n linked lists.
Here is an example of how this algorithm might be implemented in C++:
#include <unordered_map>
#include <vector>

using namespace std;  // vector and unordered_map are used unqualified below
struct ListNode
{
int val;
ListNode* next;
ListNode(int x) : val(x), next(nullptr) {}
};
ListNode* intersection(const vector<ListNode*>& lists)
{
// Create a map that maps each node in the linked lists to the number of linked lists it appears in
unordered_map<ListNode*, int> map;
// Iterate through the linked lists and for each node, increment the count in the map for that node
for (const auto& list : lists)
{
for (auto p = list; p; p = p->next)
{
map[p]++;
}
}
// Iterate through the map and return the first node that appears in all n linked lists
for (const auto& [node, count] : map)
{
if (count == lists.size())
{
return node;
}
}
// No intersection point was found
return nullptr;
}
In this example, the intersection function takes a vector of linked lists as an argument and returns the intersection point of the linked lists (if one exists). It first creates a map that maps each node in the linked lists to the number of linked lists it appears in. Then, it iterates through the linked lists and for each node, increments the count in the map for that node.
Finally, the function iterates through the map and returns the first node that appears in all n linked lists. If no such node is found, the function returns nullptr.
A:
For finding the intersection point of 2 linked lists:
Get the length of both linked lists; suppose the bigger list is k nodes longer than the second.
Traverse k nodes in the bigger linked list, then start comparing the nodes of both lists.
If they match, then you have found the intersection point.
I had come up with this method to find the intersection of 2 linked lists, but I am somehow not able to extend it to 'n'.
Could we extend this to find the intersection of n linked lists, or do we have to use the map-based approach only?
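To answer the follow-up: the length-alignment trick does extend to n lists: align every pointer to the same distance from the end, then walk all of them in lockstep until they coincide. A rough sketch (the helper names are made up for illustration and the ListNode definition from the answer above is assumed):
#include <vector>
#include <algorithm>

using namespace std;

static int listLength(ListNode* head)
{
    int n = 0;
    for (ListNode* p = head; p != nullptr; p = p->next) ++n;
    return n;
}

ListNode* intersectionByAlignment(vector<ListNode*> heads)
{
    if (heads.empty()) return nullptr;

    // Length of each list, and the shortest one
    vector<int> lens;
    for (ListNode* h : heads) lens.push_back(listLength(h));
    int minLen = *min_element(lens.begin(), lens.end());

    // Advance each pointer so all are the same distance from the end of their list
    for (size_t i = 0; i < heads.size(); ++i)
        for (int k = 0; k < lens[i] - minLen; ++k)
            heads[i] = heads[i]->next;

    // Walk all pointers in lockstep; the first node where they all coincide
    // is the intersection point of all n lists
    while (heads[0] != nullptr)
    {
        bool allEqual = true;
        for (ListNode* h : heads)
            if (h != heads[0]) { allEqual = false; break; }
        if (allEqual) return heads[0];
        for (ListNode*& h : heads) h = h->next;
    }
    return nullptr;  // the lists do not all share a common node
}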
| Intersection point of n singly linked list | I had given a technical interview in which I have been asked to find intersection point of n link list.
I could come up to find intersection point of 2 link list but couldn't extend it.
Could someone help me reach the algorithm
I tried to call the function to find integration point of 2 link list for every pair, but that didn't work.
| [
"To find the intersection point of n linked lists, you can use the following steps:\n\nCreate a map that maps each node in the linked lists to the number\nof linked lists it appears in.\nIterate through the linked lists and\nfor each node, increment the count in the map for that node.\nIterate through the map and return the first node that appears in all n\nlinked lists.\n\nHere is an example of how this algorithm might be implemented in C++:\n#include <unordered_map>\n\nstruct ListNode\n{\n int val;\n ListNode* next;\n ListNode(int x) : val(x), next(nullptr) {}\n};\n\nListNode* intersection(const vector<ListNode*>& lists)\n{\n // Create a map that maps each node in the linked lists to the number of linked lists it appears in\n unordered_map<ListNode*, int> map;\n\n // Iterate through the linked lists and for each node, increment the count in the map for that node\n for (const auto& list : lists)\n {\n for (auto p = list; p; p = p->next)\n {\n map[p]++;\n }\n }\n\n // Iterate through the map and return the first node that appears in all n linked lists\n for (const auto& [node, count] : map)\n {\n if (count == lists.size())\n {\n return node;\n }\n }\n\n // No intersection point was found\n return nullptr;\n}\n\nIn this example, the intersection function takes a vector of linked lists as an argument and returns the intersection point of the linked lists (if one exists). It first creates a map that maps each node in the linked lists to the number of linked lists it appears in. Then, it iterates through the linked lists and for each node, increments the count in the map for that node.\nFinally, the function iterates through the map and returns the first node that appears in all n linked lists. If no such node is found, the function returns nullptr.\n",
"For finding intersection point of 2 link list:\n\nGet the count of both the link list, suppose the bigger list is k nodes greater than second.\nTraverse k nodes in the bigger link list, then then start comparing nodes of both the link list.\nIf they match, then you have found the intersection point.\n\nI had come up with this method to find the intersection of 2 link list, but I am somehow not able to extend it for 'n'.\nCould we extend this to find intersection of n link list, or do we have to use map based approach only.\n"
] | [
0,
0
] | [] | [] | [
"algorithm",
"intersection",
"linked_list",
"singly_linked_list"
] | stackoverflow_0074668627_algorithm_intersection_linked_list_singly_linked_list.txt |
Q:
error using pyinstaller exe when python ttp module is in place
I am trying to convert my .py file to an .exe file using pyinstaller. The .py file works perfectly fine; however, I am facing an issue after the program is converted to an .exe file. The problem is shown right below: a ttp.lazy_import_functions "failed to save" problem with a file-not-found indication.
[![enter image description here][1]][1]
I did a search in Google for any similar error; it looks like there is only one similar discussion on GitHub, which is not 100% the same problem, because I am facing the issue when using the .exe file. See https://github.com/dmulyalin/ttp/issues/54
However, I have checked the ttp/ttp.py file, and I can see the following lazy_import_functions code with the path_to_cache.
log.info("ttp.lazy_import_functions: starting functions lazy import")
# try to load previously pickled/cached _ttp_ dictionary
path_to_cache = os.getenv("TTPCACHEFOLDER", os.path.dirname(__file__))
cache_file = os.path.join(path_to_cache, "ttp_dict_cache.pickle")
As is also shown in the picture above, it looks like the .exe file is trying to find the ttp/ttp.py file under the _MEIXXXX cache folder.
I have actually created the following patch, with the changes below in my ttp.py file, to make the .exe file work; however, I have a few concerns here, and if someone could explain them, I would appreciate it.
Changes in my ttp.py:
print(path_to_python_3x)
if path_to_python_3x:
os.startfile(f"{path_to_python_3x}\\patch.py")
def lazy_import_functions():
"""function to collect a list of all files/directories within ttp module,
parse .py files using ast and extract information about all functions
to cache them within _ttp_ dictionary
"""
_ttp_ = {
"macro": {},
"python_major_version": version_info.major,
"global_vars": {},
"template_obj": {},
"vars": {},
}
log.info("ttp.lazy_import_functions: starting functions lazy import")
# try to load previously pickled/cached _ttp_ dictionary
path_to_temp_file = tempfile.gettempdir()
_MEI_regex = "_MEI.*"
for temp_file in os.listdir(path_to_temp_file):
if re.search(_MEI_regex, temp_file):
path_to_temp_mei = path_to_temp_file +f"\\{temp_file}"
path_to_temp_ttp = f"{path_to_temp_mei}" + "\\ttp"
path_to_cache = os.getenv("TTPCACHEFOLDER", path_to_temp_ttp)
cache_file = os.path.join(path_to_cache, "ttp_dict_cache.pickle")
else:
path_to_cache = os.getenv("TTPCACHEFOLDER", os.path.dirname(__file__))
#print(path_to_cache)
cache_file = os.path.join(path_to_cache, "ttp_dict_cache.pickle")
With this patch file I am copying the ttp/ folder (which includes ttp.py) into the _MEIXXXX cache folder, so that the .exe file finds the path, and it worked fine, thankfully.
import os
import sys
import tempfile
import shutil
import re
path_to_python_3x = os.path.dirname(sys.executable)
# print(path_to_python_3x)
# print(os.getcwd())
path_to_site_packages = path_to_python_3x + "\\Lib\\site-packages"
#print(path_to_site_packages)
path_to_site_ttp = path_to_site_packages +"\\ttp"
#print(path_to_site_ttp)
_MEI_regex = "_MEI.*"
_MEI_regex_a_list = []
while True:
path_to_temp_file = tempfile.gettempdir()
for temp_file in os.listdir(path_to_temp_file):
if re.search(_MEI_regex, temp_file):
path_to_temp_mei = path_to_temp_file +f"\\{temp_file}"
_MEI_regex_a_list.append(path_to_temp_mei)
path_to_temp_ttp = os.path.join(path_to_temp_mei, "ttp")
try:
if "ttp" not in os.listdir(path_to_temp_mei):
shutil.copytree(path_to_site_ttp, path_to_temp_ttp)
except Exception as e:
print(e)
My queries here are:
1. Why does the program not work when packaged with pyinstaller?
2. Why does it check the /ttp/ttp.py file under Temp?
3. Is there any way to change the cache directory when converting with pyinstaller?
4. As you can see, I have a workaround for now. However, it won't work if the cache folder starts being kept somewhere other than Temp/_MEIXXXX, because my regex string matches the files starting with _MEI. Not sure if there is any possibility here as well.
Thanks in advance!
[1]: https://i.stack.imgur.com/n0H3j.png
A:
From what I see, the ttp module tries to access its files and has references to the installation path for ttp, which it cannot get to using the os module after it's bundled by pyinstaller.
A simpler workaround than changing the module files and applying the patch file as you did would be to just copy the installation folder of the ttp module to the bundle output folder, using pyinstaller itself, or to do it manually.
This way it would find all the ttp module files, which it was not finding after the bundle process using pyinstaller.
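For instance, telling pyinstaller to bundle the ttp package data alongside the executable might look like this (a sketch; the site-packages path is an assumption that depends on your environment, and on Windows the separator inside --add-data is a semicolon):
pyinstaller --onefile your_script.py --add-data "C:\Python39\Lib\site-packages\ttp;ttp"
Newer pyinstaller versions also offer --collect-all ttp, which gathers the package's files for you, if your version supports it.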
I was facing the same issue and it fixed it for me.
| error using pyinstaller exe when python ttp module is in place | I am trying convert my .py file to an exe file using pyinstaller. The .py file perfectly work fine, however, I am facing an issue after the program is converted to .exe file. The problem is shared right below. ttp.lazy_import_functions: failed to save problem with File not found indication.
[![enter image description here][1]][1]
I did a search in google if any similar error, it looks there is only one similar discussion in github which is not the %100 same problem. Because I am facing an issue when using .exe file. See https://github.com/dmulyalin/ttp/issues/54
However, I have checked ttp/ttp.py file, I can see following lazy_import_functions with the path_to_cache.
```log.info("ttp.lazy_import_functions: starting functions lazy import")
# try to load previously pickled/cached _ttp_ dictionary
path_to_cache = os.getenv("TTPCACHEFOLDER", os.path.dirname(__file__))
cache_file = os.path.join(path_to_cache, "ttp_dict_cache.pickle")```
As it is also shown above picture, it looks that .exe file trying to find ttp/ttp.py file under _MEIXXXX cache file.
I have actually created a the following patch with the following changes in my ttp.py file to make .exe file work, however I have a few concerns here if someone explain it, I appricated it.
Changes in my ttp.py:
print(path_to_python_3x)
if path_to_python_3x:
os.startfile(f"{path_to_python_3x}\\patch.py")
def lazy_import_functions():
"""function to collect a list of all files/directories within ttp module,
parse .py files using ast and extract information about all functions
to cache them within _ttp_ dictionary
"""
_ttp_ = {
"macro": {},
"python_major_version": version_info.major,
"global_vars": {},
"template_obj": {},
"vars": {},
}
log.info("ttp.lazy_import_functions: starting functions lazy import")
# try to load previously pickled/cached _ttp_ dictionary
path_to_temp_file = tempfile.gettempdir()
_MEI_regex = "_MEI.*"
for temp_file in os.listdir(path_to_temp_file):
if re.search(_MEI_regex, temp_file):
path_to_temp_mei = path_to_temp_file +f"\\{temp_file}"
path_to_temp_ttp = f"{path_to_temp_mei}" + "\\ttp"
path_to_cache = os.getenv("TTPCACHEFOLDER", path_to_temp_ttp)
cache_file = os.path.join(path_to_cache, "ttp_dict_cache.pickle")
else:
path_to_cache = os.getenv("TTPCACHEFOLDER", os.path.dirname(__file__))
#print(path_to_cache)
cache_file = os.path.join(path_to_cache, "ttp_dict_cache.pickle")
With this patch file I am copying ttp/ folder includes ttp.py into _IMEXXXX cache file, so that .exe file finds the path, and worked fine, thankfully.
import os
import sys
import tempfile
import shutil
import re
path_to_python_3x = os.path.dirname(sys.executable)
# print(path_to_python_3x)
# print(os.getcwd())
path_to_site_packages = path_to_python_3x + "\\Lib\\site-packages"
#print(path_to_site_packages)
path_to_site_ttp = path_to_site_packages +"\\ttp"
#print(path_to_site_ttp)
_MEI_regex = "_MEI.*"
_MEI_regex_a_list = []
while True:
path_to_temp_file = tempfile.gettempdir()
for temp_file in os.listdir(path_to_temp_file):
if re.search(_MEI_regex, temp_file):
path_to_temp_mei = path_to_temp_file +f"\\{temp_file}"
_MEI_regex_a_list.append(path_to_temp_mei)
path_to_temp_ttp = os.path.join(path_to_temp_mei, "ttp")
try:
if "ttp" not in os.listdir(path_to_temp_mei):
shutil.copytree(path_to_site_ttp, path_to_temp_ttp)
except Exception as e:
print(e)```
My queires here is that:
1. Why the program does not work when installing with pyinstaller?
2. Why it checks /ttp/ttp.py file under under Temp?
3. Any way to change cache directory when converting with pyinstaller?
4. As you can see, I have a workaround for now. However, I won't work if cache file started to be kept other than Temp/_IMEXXXX. Because my regex string chooses the files startswidth _IME. Not sure if any possiblity here as well.
Thanks in advance!
[1]: https://i.stack.imgur.com/n0H3j.png
| [
"From what i see, the ttp module tries to access its files and has references to the installation path for ttp which it cannot get to using the os module after its bundled by pyinstaller.\nOne simpler workaround than changing the module files and applying the patch file that you did, would be to just copy the installation folders of the ttp module to the bundle output folder using pyinstaller itself or do it manually.\nThis way it would find all the ttp module files which it was not after the bundle process using pyinstaller.\nI was facing the same issue and it fixed it for me.\n"
] | [
0
] | [] | [] | [
"exe",
"pyinstaller",
"python",
"python_3.x"
] | stackoverflow_0074173221_exe_pyinstaller_python_python_3.x.txt |
Q:
Java Projection for nested objects using Spring Data JPA?
I have the following projection class and I want to retrieve data by joining the Recipe and Ingredient tables from the DB using @Query in Spring Data JPA:
public interface RecipeProjection {
Long getId();
String getTitle();
List<Ingredient> getIngredients();
}
However, I cannot map the ingredients to the projection. Here is my query in the repository:
@Query(value = "SELECT r.id AS id, r.title, i.name AS ingredientName " +
"FROM Recipe r " +
"LEFT JOIN RecipeIngredient ri ON r.id = ri.recipeId " +
"LEFT JOIN Ingredient i ON ri.ingredientId = i.id "
)
List<RecipeSearchProjection> getData();
I am not sure if using a proper alias for the ingredient table can solve the problem, but even when I tried, I could not retrieve its data. So, is it possible to get nested data via a Java projection?
A:
I suggest using query methods where queries are derived from the method name directly without writing them manually. When interface-based projections are used, the names of their methods have to be identical to the getter methods defined in the entity class.
Try to define your method as:
List<RecipeSearchProjection> findAllBy();
However, projections can also be used with @Query annotation. For more details on the different ways to use JPA query projections, check out the blog post.
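Put together, a nested interface projection might look roughly like this (a sketch; it assumes the Recipe entity maps an ingredients association and that the getter names match the entity properties):
public interface RecipeProjection {
    Long getId();
    String getTitle();
    List<IngredientProjection> getIngredients();   // nested projection instead of the entity

    interface IngredientProjection {
        String getName();
    }
}

public interface RecipeRepository extends JpaRepository<Recipe, Long> {
    List<RecipeProjection> findAllBy();            // derived query, no @Query needed
}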
| Java Projection for nested objects using Spring Data JPA? | I have the following projection class and I want to retrieve data by joining Recipe and Ingredient tables from db using @Query in Spring data JPA:
public interface RecipeProjection {
Long getId();
String getTitle();
List<Ingredient> getIngredients();
}
However, I cannot map the ingredients to the projection. Here is my query in the repository:
@Query(value = "SELECT r.id AS id, r.title, i.name AS ingredientName " +
"FROM Recipe r " +
"LEFT JOIN RecipeIngredient ri ON r.id = ri.recipeId " +
"LEFT JOIN Ingredient i ON ri.ingredientId = i.id "
)
List<RecipeSearchProjection> getData();
I am not sure if using a proper alias for ingredient table can solve the problem, but even I tried, I cannot retrieve its data. So, is it possible to get nested data via Java Projection?
| [
"I suggest using query methods where queries are derived from the method name directly without writing them manually. When interface-based projections are used, the names of their methods have to be identical to the getter methods defined in the entity class.\nTry to define your method as:\nList<RecipeSearchProjection> findAllBy();\n\nHowever, projections can also be used with @Query annotation. For more details on the different ways to use JPA query projections, check out the blog post.\n"
] | [
0
] | [] | [] | [
"java",
"projection",
"spring",
"spring_boot",
"spring_data_jpa"
] | stackoverflow_0074675588_java_projection_spring_spring_boot_spring_data_jpa.txt |
Q:
Pandas resample drops (static) datetime column, how do I keep it?
I'm working with a pandas MultiIndex that is given by the three keys:
[Verbundzuordnung, ProjektIndex, Datum]
I would like to resample the dataframe on Datum hourly, which drops the right-hand column TagDesAbdichtens; I would like to keep it, as it's static.
Verbundzuordnung ProjektIndex Datum TagDesAbdichtens
1 81679 2021-11-10 00:00:00+00:00 2021-12-08
2021-11-10 00:00:00+00:00 2021-12-08
2021-11-10 00:00:00+00:00 2021-12-08
2021-11-10 00:00:00+00:00 2021-12-08
2021-11-10 00:00:00+00:00 2021-12-08
... ... ... ...
2 94574 2022-02-28 23:00:00+00:00 2022-01-31
2022-02-28 23:00:00+00:00 2022-01-31
2022-02-28 23:00:00+00:00 2022-01-31
2022-02-28 23:00:00+00:00 2022-01-31
2022-02-28 23:00:00+00:00 2022-01-31
285192 rows × 1 columns
There are additional columns that I left out here for easier comprehension.
I am currently applying this to resample the dataframe
all_merged = all_merged.groupby([
pd.Grouper(level='Verbundzuordnung'),
pd.Grouper(level='ProjektIndex'),
pd.Grouper(level='Datum', freq='H')]
)
all_merged.mean() gives me the wanted output with TagDesAbdichtens missing.
This value is unique and static for each Verbundzuordnung and ProjektIndex, and I would like to have it back in the resampled version.
Is there a way to do it with native pandas functions?
A:
I've had success resampling using the native resample function. For example,
resample_dict = {
'Verbundzuordnung': 'mean',
'ProjektIndex': 'mean',
'TagDesAbdichtens': 'first'
}
data = data.resample("60T", closed='left', label='left').apply(resample_dict)
You can apply whichever grouping keys (in place of mean) to your columns (e.g. first, min, max, etc).
See https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.resample.html for more.
A:
Instead of mean() you can do the following
agg({'TagDesAbdichtens': 'first', 'another_col': 'mean', 'another_col2': 'mean', ... })
That is, you can specify a different aggregate function for each column.
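Applied to the grouper from the question, that could look like this (a sketch; using 'first' for the static column is an assumption, and any other columns would each need an entry in the dict):
all_merged = all_merged.groupby([
    pd.Grouper(level='Verbundzuordnung'),
    pd.Grouper(level='ProjektIndex'),
    pd.Grouper(level='Datum', freq='H')]
).agg({
    'TagDesAbdichtens': 'first',     # static per group, so just keep the first value
    # 'SomeNumericColumn': 'mean',   # hypothetical: other columns aggregated with mean
})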
| Pandas resample drops (static) datetime column, how do I keep it? | I'm working with a pandas Multiindex that is given by the three keys:
[Verbundzuordnung, ProjektIndex, Datum],
I would like to resample the dataframe on Datum hourly, which drops the right colum TagDesAbdichtens, I would like to keep it as it's static.
Verbundzuordnung ProjektIndex Datum TagDesAbdichtens
1 81679 2021-11-10 00:00:00+00:00 2021-12-08
2021-11-10 00:00:00+00:00 2021-12-08
2021-11-10 00:00:00+00:00 2021-12-08
2021-11-10 00:00:00+00:00 2021-12-08
2021-11-10 00:00:00+00:00 2021-12-08
... ... ... ...
2 94574 2022-02-28 23:00:00+00:00 2022-01-31
2022-02-28 23:00:00+00:00 2022-01-31
2022-02-28 23:00:00+00:00 2022-01-31
2022-02-28 23:00:00+00:00 2022-01-31
2022-02-28 23:00:00+00:00 2022-01-31
285192 rows × 1 columns
There are aditional columns that I left out here for easier comprehension.
I am currently applying this to resample the dataframe
all_merged = all_merged.groupby([
pd.Grouper(level='Verbundzuordnung'),
pd.Grouper(level='ProjektIndex'),
pd.Grouper(level='Datum', freq='H')]
)
all_merged.mean() gives me the wanted output with TagDesAbdichtens missing.
This value ist for each Verbundzuordnung and ProjektIndex unique and static and I would like to have it back in the resampled version.
Is there a way to do it with native pandas functions?
| [
"I've had success resampling using the native resample function. For example,\n resample_dict = { \n 'Verbundzuordnung': 'mean', \n 'ProjektIndex': 'mean',\n 'TagDesAbdichtens': 'first'\n }\n\n data = data.resample(\"60T\", closed='left', label='left').apply(resample_dict)\n\nYou can apply whichever grouping keys (in place of mean) to your columns (e.g. first, min, max, etc).\nSee https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.resample.html for more.\n",
"Instead of mean() you can do the following\nagg({'TagDesAbdichtens': 'first', 'another_col': 'mean', 'another_col2': 'mean', ... })\n\nThat is, you can specify a different aggregate function for each column.\n"
] | [
0,
0
] | [] | [] | [
"datetime",
"group_by",
"pandas",
"pandas_resample",
"python"
] | stackoverflow_0074675902_datetime_group_by_pandas_pandas_resample_python.txt |
Q:
Google Sheets: Query and list the last 5 values in a column if the column contains a number
I want to use Sparkline in a spreadsheet to show a trend of the last 5 soccer matches, where A and B are the goals scored and conceded, and C is the resulting points.
In column C, the points are only generated if values are entered for the goals and goals conceded, i.e. the columns are not empty.
A (Goals)
B (Conceded)
C (Points)
4
4
1
4
4
1
4
4
0
3
4
4
1
0
4
0
As you see, in row 3, column c is empty.
What I basically try to achieve, is to create a list where the last 5 entries which are not empty / null, are listed:
C (Points)
1
1
3
1
0
I used this formula, but it somehow does not work:
=query(J15:J114,"select * offset "&count(J15:J114)-5)
shorturl.at/gHPY9 (example result picture)
Tried to find a solution myself, but am stuck.
Best,
Feal
A:
Use query() with a where clause, like this:
=query(
J15:J114,
"where J is not null
offset " & max(0, count(J15:J114) - 5),
0
)
A:
Here is an example of how you could use the QUERY function in Google Sheets to create a list of the last 5 non-empty entries in a column:
=QUERY(J15:J114, "SELECT * WHERE J15:J114 IS NOT NULL ORDER BY ROW() DESC LIMIT 5", 0)
In this formula, the QUERY function takes three arguments:
The range of cells that you want to query (J15:J114 in this example)
The query that you want to run on the range of cells (SELECT * WHERE J15:J114 IS NOT NULL ORDER BY ROW() DESC LIMIT 5 in this example)
A flag indicating whether or not the first row of the range should be treated as column labels (0 in this example, which means that the first row will not be treated as column labels)
The query that we use in this formula filters the range of cells to only include rows where the value in the J column is not NULL. It then sorts the rows in descending order based on their row number, and limits the result to the last 5 rows. This will give us a list of the last 5 non-empty entries in the J column.
I hope this helps! Let me know if you have any questions.
| Google Sheets: Query and list the last 5 values in a column if the column contains a number | I want to use Sparkline for a spreadsheet to show a trend of the last 5 soccer matches, where A and B are the goals, and C are the resulting points.
In column C, the points are only generated if values are entered for the goals and goals conceded, i.e. the columns are not empty.
A (Goals)
B (Conceded)
C (Points)
4
4
1
4
4
1
4
4
0
3
4
4
1
0
4
0
As you see, in row 3, column c is empty.
What I basically try to achieve, is to create a list where the last 5 entries which are not empty / null, are listed:
C (Points)
1
1
3
1
0
Is used this formula, but it somehow does not work
=query(J15:J114,"select * offset "&count(J15:J114)-5)
shorturl.at/gHPY9 (example result picture)
Tried to find a solution myself, but am stuck.
Best,
Feal
| [
"Use query() with a where clause, like this:\n=query( \n J15:J114, \n \"where J is not null \n offset \" & max(0, count(J15:J114) - 5), \n 0 \n)\n\n",
"Here is an example of how you could use the QUERY function in Google Sheets to create a list of the last 5 non-empty entries in a column:\n=QUERY(J15:J114, \"SELECT * WHERE J15:J114 IS NOT NULL ORDER BY ROW() DESC LIMIT 5\", 0)\n\nIn this formula, the QUERY function takes three arguments:\n\nThe range of cells that you want to query (J15:J114 in this example)\nThe query that you want to run on the range of cells (SELECT * WHERE J15:J114 IS NOT NULL ORDER BY ROW() DESC LIMIT 5 in this example)\nA flag indicating whether or not the first row of the range should be treated as column labels (0 in this example, which means that the first row will not be treated as column labels)\n\nThe query that we use in this formula filters the range of cells to only include rows where the value in the J column is not NULL. It then sorts the rows in descending order based on their row number, and limits the result to the last 5 rows. This will give us a list of the last 5 non-empty entries in the J column.\nI hope this helps! Let me know if you have any questions.\n"
] | [
1,
0
] | [] | [] | [
"google_sheets",
"google_sheets_formula"
] | stackoverflow_0074677036_google_sheets_google_sheets_formula.txt |
Q:
position a fixed/absolute pseudoelement inside relative div (which have a scroll) always at the bottom
❌ this isn't what I want, since it didn't stick to the end, and didn't stay fixed.
(the pseudoelement is a gradient of white and transparent)
✅ this is what I want, and is correct only at the start when I never scrolled :(
(the pseudoelement is a gradient of white and transparent)
I tried absolute, fixed, and also sticky... nothing helps. absolute is correct only at the start when we haven't scrolled, but when scrolling, the ::before will start to follow the scrollbar, which isn't what I want... I want the ::before to stay at the bottom the whole time (fixed doesn't help because it will go outside the <div> and follow the <body> instead).
I can't change the structure, since I need to use ::before or ::after and not create a new div, but I can use whatever CSS you suggest.
I thought it was a simple problem since fixed should solve it, but it turns out to be very hard, and fixed never works :( (I also looked at about 10 Stack Overflow questions, and nothing helped)
Before downvoting, if you want to do that, please at least look at the code and try the demo,
the purpose is to show a little gradient at the end of the div...
Demo:
just try the demo below to see the bug:
div {
position: relative;
width: 300px;
height: 70vh;
padding: 1rem;
border: 2px solid red;
overflow: auto;
border-radius: 1rem;
}
div::before {
content: "";
position: absolute;
/* I tried also with fixed, but it goes outside the div :( */
bottom: 0;
left: 0;
width: 100%;
height: 40%;
background-image: linear-gradient(to top, white, transparent);
/*outline: 5px solid blue;*/ /* if you can't see the gradient, uncomment this line */
}
<div>
Lorem ipsum dolor sit amet consectetur adipisicing elit. Non sint praesentium quod, perferendis nesciunt, cumque exercitationem magni officiis aliquid, a dolor omnis dolore placeat fugiat consequatur. Fugit repudiandae porro id! Error voluptas, aspernatur
magni sapiente ducimus illo maiores repellat voluptatibus consectetur aut atque unde eveniet excepturi nam, ipsum at. Labore saepe ab ex beatae libero autem. A expedita corporis placeat? Eos eum nobis excepturi delectus enim autem, vitae ea mollitia,
soluta sequi nihil, officiis iusto quos sunt omnis perspiciatis consequuntur aliquam impedit voluptates. Odit, aperiam facere et inventore dicta distinctio. Corrupti consectetur, tempore nihil voluptates minus odio, vitae facilis eveniet quos officiis
architecto ipsum libero cupiditate aut. Sapiente doloribus aperiam libero laborum, accusamus ipsum et est tenetur voluptate iure dolorum. Sit quae dolor maxime assumenda quam, fuga id soluta aliquam reprehenderit vitae. Consequuntur ducimus, deleniti
animi excepturi repudiandae pariatur assumenda neque? Impedit tempore neque odio a, excepturi maiores rerum porro! Lorem ipsum dolor sit amet consectetur adipisicing elit. Non sint praesentium quod, perferendis nesciunt, cumque exercitationem magni
officiis aliquid, a dolor omnis dolore placeat fugiat consequatur. Fugit repudiandae porro id! Error voluptas, aspernatur magni sapiente ducimus illo maiores repellat voluptatibus consectetur aut atque unde eveniet excepturi nam, ipsum at. Labore saepe
ab ex beatae libero autem. A expedita corporis placeat? Eos eum nobis excepturi delectus enim autem, vitae ea mollitia, soluta sequi nihil, officiis iusto quos sunt omnis perspiciatis consequuntur aliquam impedit voluptates. Odit, aperiam facere et
inventore dicta distinctio. Corrupti consectetur, tempore nihil voluptates minus odio, vitae facilis eveniet quos officiis architecto ipsum libero cupiditate aut. Sapiente doloribus aperiam libero laborum, accusamus ipsum et est tenetur voluptate iure
dolorum. Sit quae dolor maxime assumenda quam, fuga id soluta aliquam reprehenderit vitae. Consequuntur ducimus, deleniti animi excepturi repudiandae pariatur assumenda neque? Impedit tempore neque odio a, excepturi maiores rerum porro! Lorem ipsum
dolor sit amet consectetur adipisicing elit. Non sint praesentium quod, perferendis nesciunt, cumque exercitationem magni officiis aliquid, a dolor omnis dolore placeat fugiat consequatur. Fugit repudiandae porro id! Error voluptas, aspernatur magni
sapiente ducimus illo maiores repellat voluptatibus consectetur aut atque unde eveniet excepturi nam, ipsum at. Labore saepe ab ex beatae libero autem. A expedita corporis placeat? Eos eum nobis excepturi delectus enim autem, vitae ea mollitia, soluta
sequi nihil, officiis iusto quos sunt omnis perspiciatis consequuntur aliquam impedit voluptates. Odit, aperiam facere et inventore dicta distinctio. Corrupti consectetur, tempore nihil voluptates minus odio, vitae facilis eveniet quos officiis architecto
ipsum libero cupiditate aut. Sapiente doloribus aperiam libero laborum, accusamus ipsum et est tenetur voluptate iure dolorum. Sit quae dolor maxime assumenda quam, fuga id soluta aliquam reprehenderit vitae. Consequuntur ducimus, deleniti animi excepturi
repudiandae pariatur assumenda neque? Impedit tempore neque odio a, excepturi maiores rerum porro!
</div>
A:
You can try out this approach to add a shadow for your paragraph.
You can make your wrapper container relative to your inner content to keep your shadow fixed at the bottom of the box.
Here is the sample code.
<div class="content">
Lorem ipsum dolor sit amet consectetur adipisicing elit. Non sint praesentium quod, perferendis nesciunt, cumque exercitationem magni officiis aliquid, a dolor omnis dolore placeat fugiat consequatur. Fugit repudiandae porro id! Error voluptas, aspernatur
magni sapiente ducimus illo maiores repellat voluptatibus consectetur aut atque unde eveniet excepturi nam, ipsum at. Labore saepe ab ex beatae libero autem. A expedita corporis placeat? Eos eum nobis excepturi delectus enim autem, vitae ea mollitia,
soluta sequi nihil, officiis iusto quos sunt omnis perspiciatis consequuntur aliquam impedit voluptates. Odit, aperiam facere et inventore dicta distinctio. Corrupti consectetur, tempore nihil voluptates minus odio, vitae facilis eveniet quos officiis
architecto ipsum libero cupiditate aut. Sapiente doloribus aperiam libero laborum, accusamus ipsum et est tenetur voluptate iure dolorum. Sit quae dolor maxime assumenda quam, fuga id soluta aliquam reprehenderit vitae. Consequuntur ducimus, deleniti
animi excepturi repudiandae pariatur assumenda neque? Impedit tempore neque odio a, excepturi maiores rerum porro! Lorem ipsum dolor sit amet consectetur adipisicing elit. Non sint praesentium quod, perferendis nesciunt, cumque exercitationem magni
officiis aliquid, a dolor omnis dolore placeat fugiat consequatur. Fugit repudiandae porro id! Error voluptas, aspernatur magni sapiente ducimus illo maiores repellat voluptatibus consectetur aut atque unde eveniet excepturi nam, ipsum at. Labore saepe
ab ex beatae libero autem. A expedita corporis placeat? Eos eum nobis excepturi delectus enim autem, vitae ea mollitia, soluta sequi nihil, officiis iusto quos sunt omnis perspiciatis consequuntur aliquam impedit voluptates. Odit, aperiam facere et
inventore dicta distinctio. Corrupti consectetur, tempore nihil voluptates minus odio, vitae facilis eveniet quos officiis architecto ipsum libero cupiditate aut. Sapiente doloribus aperiam libero laborum, accusamus ipsum et est tenetur voluptate iure
dolorum. Sit quae dolor maxime assumenda quam, fuga id soluta aliquam reprehenderit vitae. Consequuntur ducimus, deleniti animi excepturi repudiandae pariatur assumenda neque? Impedit tempore neque odio a, excepturi maiores rerum porro! Lorem ipsum
dolor sit amet consectetur adipisicing elit. Non sint praesentium quod, perferendis nesciunt, cumque exercitationem magni officiis aliquid, a dolor omnis dolore placeat fugiat consequatur. Fugit repudiandae porro id! Error voluptas, aspernatur magni
sapiente ducimus illo maiores repellat voluptatibus consectetur aut atque unde eveniet excepturi nam, ipsum at. Labore saepe ab ex beatae libero autem. A expedita corporis placeat? Eos eum nobis excepturi delectus enim autem, vitae ea mollitia, soluta
sequi nihil, officiis iusto quos sunt omnis perspiciatis consequuntur aliquam impedit voluptates. Odit, aperiam facere et inventore dicta distinctio. Corrupti consectetur, tempore nihil voluptates minus odio, vitae facilis eveniet quos officiis architecto
ipsum libero cupiditate aut. Sapiente doloribus aperiam libero laborum, accusamus ipsum et est tenetur voluptate iure dolorum. Sit quae dolor maxime assumenda quam, fuga id soluta aliquam reprehenderit vitae. Consequuntur ducimus, deleniti animi excepturi
repudiandae pariatur assumenda neque? Impedit tempore neque odio a, excepturi maiores rerum porro!
<div class="fixed-wrapper">
<div class="fixed-shadow" />
</div>
</div>
.content {
position: relative;
width: 300px;
height: 70vh;
padding: 1rem;
border: 2px solid red;
overflow: auto;
border-radius: 1rem;
margin: 0 auto;
}
.fixed-wrapper {
position: absolute;
bottom: 0;
width: 300px;
height: 100px;
}
.fixed-shadow {
position: fixed;
width: 300px;
height: 100px;
background-image: linear-gradient(to top, white, transparent);
}
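If the markup truly can't change (as the question requires), a position: sticky pseudo-element is another option worth trying; it uses ::after so the sticky box sits at the end of the flowed content. This is a rough, untested sketch; the fixed height and offsets are assumptions that may need tuning against the container's padding:
div::after {
  content: "";
  display: block;
  position: sticky;               /* pins to the bottom edge of the scrollport while scrolling */
  bottom: 0;
  height: 6rem;
  margin-top: -6rem;              /* overlay the gradient on the last lines instead of adding space */
  background-image: linear-gradient(to top, white, transparent);
  pointer-events: none;           /* let clicks and selection pass through the overlay */
}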
| position a fixed/absolute pseudoelement inside relative div (which have a scroll) always at the bottom | ❌ this isn't what I want, since it didn't stick to the end, and didn't stay fixed.
(the pseudoelement is a gradient of white and transparent)
✅ this is what I want, and is correct only at the start when I never scrolled :(
(the pseudoelement is a gradient of white and transparent)
I tried absolute, fixed, and also sticky... nothing helps. Absolute is correct only at the start before scrolling, but once you scroll, the ::before starts to follow the scrollbar, which isn't what I want... I want the ::before to stay at the bottom the whole time (fixed doesn't help because it goes outside the <div> and follows the <body> instead).
I can't change the structure, since I need to use ::before or ::after rather than create a new div, but I can use whatever CSS you suggest.
I thought this was a simple problem since fixed should solve it, but it turns out to be very hard, and fixed never works :( (I also looked at about 10 StackOverflow questions, and nothing helped).
Before downvoting, please at least look at the code and try the demo;
the purpose is to show a little gradient at the end of the div...
Demo:
just try the demo below to see the bug:
div {
position: relative;
width: 300px;
height: 70vh;
padding: 1rem;
border: 2px solid red;
overflow: auto;
border-radius: 1rem;
}
div::before {
content: "";
position: absolute;
/* I tried also with fixed, but it goes outside the div :( */
bottom: 0;
left: 0;
width: 100%;
height: 40%;
background-image: linear-gradient(to top, white, transparent);
/*outline: 5px solid blue;*/ /* if you can't see the gradient, uncomment this line */
}
<div>
Lorem ipsum dolor sit amet consectetur adipisicing elit. Non sint praesentium quod, perferendis nesciunt, cumque exercitationem magni officiis aliquid, a dolor omnis dolore placeat fugiat consequatur. Fugit repudiandae porro id! Error voluptas, aspernatur
magni sapiente ducimus illo maiores repellat voluptatibus consectetur aut atque unde eveniet excepturi nam, ipsum at. Labore saepe ab ex beatae libero autem. A expedita corporis placeat? Eos eum nobis excepturi delectus enim autem, vitae ea mollitia,
soluta sequi nihil, officiis iusto quos sunt omnis perspiciatis consequuntur aliquam impedit voluptates. Odit, aperiam facere et inventore dicta distinctio. Corrupti consectetur, tempore nihil voluptates minus odio, vitae facilis eveniet quos officiis
architecto ipsum libero cupiditate aut. Sapiente doloribus aperiam libero laborum, accusamus ipsum et est tenetur voluptate iure dolorum. Sit quae dolor maxime assumenda quam, fuga id soluta aliquam reprehenderit vitae. Consequuntur ducimus, deleniti
animi excepturi repudiandae pariatur assumenda neque? Impedit tempore neque odio a, excepturi maiores rerum porro! Lorem ipsum dolor sit amet consectetur adipisicing elit. Non sint praesentium quod, perferendis nesciunt, cumque exercitationem magni
officiis aliquid, a dolor omnis dolore placeat fugiat consequatur. Fugit repudiandae porro id! Error voluptas, aspernatur magni sapiente ducimus illo maiores repellat voluptatibus consectetur aut atque unde eveniet excepturi nam, ipsum at. Labore saepe
ab ex beatae libero autem. A expedita corporis placeat? Eos eum nobis excepturi delectus enim autem, vitae ea mollitia, soluta sequi nihil, officiis iusto quos sunt omnis perspiciatis consequuntur aliquam impedit voluptates. Odit, aperiam facere et
inventore dicta distinctio. Corrupti consectetur, tempore nihil voluptates minus odio, vitae facilis eveniet quos officiis architecto ipsum libero cupiditate aut. Sapiente doloribus aperiam libero laborum, accusamus ipsum et est tenetur voluptate iure
dolorum. Sit quae dolor maxime assumenda quam, fuga id soluta aliquam reprehenderit vitae. Consequuntur ducimus, deleniti animi excepturi repudiandae pariatur assumenda neque? Impedit tempore neque odio a, excepturi maiores rerum porro! Lorem ipsum
dolor sit amet consectetur adipisicing elit. Non sint praesentium quod, perferendis nesciunt, cumque exercitationem magni officiis aliquid, a dolor omnis dolore placeat fugiat consequatur. Fugit repudiandae porro id! Error voluptas, aspernatur magni
sapiente ducimus illo maiores repellat voluptatibus consectetur aut atque unde eveniet excepturi nam, ipsum at. Labore saepe ab ex beatae libero autem. A expedita corporis placeat? Eos eum nobis excepturi delectus enim autem, vitae ea mollitia, soluta
sequi nihil, officiis iusto quos sunt omnis perspiciatis consequuntur aliquam impedit voluptates. Odit, aperiam facere et inventore dicta distinctio. Corrupti consectetur, tempore nihil voluptates minus odio, vitae facilis eveniet quos officiis architecto
ipsum libero cupiditate aut. Sapiente doloribus aperiam libero laborum, accusamus ipsum et est tenetur voluptate iure dolorum. Sit quae dolor maxime assumenda quam, fuga id soluta aliquam reprehenderit vitae. Consequuntur ducimus, deleniti animi excepturi
repudiandae pariatur assumenda neque? Impedit tempore neque odio a, excepturi maiores rerum porro!
</div>
| [
"You can try out this approach to add a shadow for your paragraph.\nYou can make your wrapper container relative to your inner content to keep your shadow fixed at the bottom of the box.\nHere is the sample code.\n<div class=\"content\">\n Lorem ipsum dolor sit amet consectetur adipisicing elit. Non sint praesentium quod, perferendis nesciunt, cumque exercitationem magni officiis aliquid, a dolor omnis dolore placeat fugiat consequatur. Fugit repudiandae porro id! Error voluptas, aspernatur\n magni sapiente ducimus illo maiores repellat voluptatibus consectetur aut atque unde eveniet excepturi nam, ipsum at. Labore saepe ab ex beatae libero autem. A expedita corporis placeat? Eos eum nobis excepturi delectus enim autem, vitae ea mollitia,\n soluta sequi nihil, officiis iusto quos sunt omnis perspiciatis consequuntur aliquam impedit voluptates. Odit, aperiam facere et inventore dicta distinctio. Corrupti consectetur, tempore nihil voluptates minus odio, vitae facilis eveniet quos officiis\n architecto ipsum libero cupiditate aut. Sapiente doloribus aperiam libero laborum, accusamus ipsum et est tenetur voluptate iure dolorum. Sit quae dolor maxime assumenda quam, fuga id soluta aliquam reprehenderit vitae. Consequuntur ducimus, deleniti\n animi excepturi repudiandae pariatur assumenda neque? Impedit tempore neque odio a, excepturi maiores rerum porro! Lorem ipsum dolor sit amet consectetur adipisicing elit. Non sint praesentium quod, perferendis nesciunt, cumque exercitationem magni\n officiis aliquid, a dolor omnis dolore placeat fugiat consequatur. Fugit repudiandae porro id! Error voluptas, aspernatur magni sapiente ducimus illo maiores repellat voluptatibus consectetur aut atque unde eveniet excepturi nam, ipsum at. Labore saepe\n ab ex beatae libero autem. A expedita corporis placeat? Eos eum nobis excepturi delectus enim autem, vitae ea mollitia, soluta sequi nihil, officiis iusto quos sunt omnis perspiciatis consequuntur aliquam impedit voluptates. Odit, aperiam facere et\n inventore dicta distinctio. Corrupti consectetur, tempore nihil voluptates minus odio, vitae facilis eveniet quos officiis architecto ipsum libero cupiditate aut. Sapiente doloribus aperiam libero laborum, accusamus ipsum et est tenetur voluptate iure\n dolorum. Sit quae dolor maxime assumenda quam, fuga id soluta aliquam reprehenderit vitae. Consequuntur ducimus, deleniti animi excepturi repudiandae pariatur assumenda neque? Impedit tempore neque odio a, excepturi maiores rerum porro! Lorem ipsum\n dolor sit amet consectetur adipisicing elit. Non sint praesentium quod, perferendis nesciunt, cumque exercitationem magni officiis aliquid, a dolor omnis dolore placeat fugiat consequatur. Fugit repudiandae porro id! Error voluptas, aspernatur magni\n sapiente ducimus illo maiores repellat voluptatibus consectetur aut atque unde eveniet excepturi nam, ipsum at. Labore saepe ab ex beatae libero autem. A expedita corporis placeat? Eos eum nobis excepturi delectus enim autem, vitae ea mollitia, soluta\n sequi nihil, officiis iusto quos sunt omnis perspiciatis consequuntur aliquam impedit voluptates. Odit, aperiam facere et inventore dicta distinctio. Corrupti consectetur, tempore nihil voluptates minus odio, vitae facilis eveniet quos officiis architecto\n ipsum libero cupiditate aut. Sapiente doloribus aperiam libero laborum, accusamus ipsum et est tenetur voluptate iure dolorum. Sit quae dolor maxime assumenda quam, fuga id soluta aliquam reprehenderit vitae. 
Consequuntur ducimus, deleniti animi excepturi\n repudiandae pariatur assumenda neque? Impedit tempore neque odio a, excepturi maiores rerum porro!\n <div class=\"fixed-wrapper\">\n <div class=\"fixed-shadow\" />\n </div>\n</div>\n\n.content {\n position: relative;\n width: 300px;\n height: 70vh;\n padding: 1rem;\n border: 2px solid red;\n overflow: auto;\n border-radius: 1rem;\n margin: 0 auto;\n}\n\n.fixed-wrapper {\n position: absolute;\n bottom: 0;\n width: 300px;\n height: 100px;\n}\n\n.fixed-shadow {\n position: fixed;\n width: 300px;\n height: 100px;\n background-image: linear-gradient(to top, white, transparent);\n}\n\n"
] | [
0
] | [] | [] | [
"css",
"css_position",
"html",
"pseudo_element"
] | stackoverflow_0074675782_css_css_position_html_pseudo_element.txt |
Q:
How can I display the new value made by input for the next screen in Kivy
I have been trying to make this code work. I'm using ScreenManager to manage my screens.
I want the input I entered on the first screen to be displayed on the next screen. But instead, it just shows the initial value, and it doesn't change to the inputted value.
Here is the code I have done
from kivy.app import App
from kivy.lang import Builder
from kivy.uix.screenmanager import ScreenManager, Screen
from kivy.properties import ObjectProperty
from kivy.clock import Clock
Builder.load_string("""
<MenuScreen>:
promptObject: prompts
BoxLayout:
orientation: 'horizontal'
TextInput:
id: prompts
pos: 20,20
Button:
text: "Enter Prompt"
pos: 30,30
size: 100, 30
on_press: root.submit()
<Newscreen>
BoxLayout:
orientation: 'vertical'
TextInput:
id: display_output
text: root.output
readonly: True
""")
class MenuScreen(Screen):
promptObject = ObjectProperty()
prompt = ''
def submit(self):
prompt = self.promptObject.text
global result
result = prompt
sm.add_widget(NewScreen(name="Settings"))
sm.switch_to(sm.get_screen("Settings"))
NewScreen.display(self)
class NewScreen(Screen):
output = "testing testing"
def display(self):
self.output = result
print(result) #To test if it works
class TestApp(App):
def build(self):
global sm
sm = ScreenManager()
sm.add_widget(MenuScreen(name='menu'))
return sm
if __name__ == '__main__':
TestApp().run()
I'm also thinking whether I can instead declare the layout for the second screen later, right before I call for the next screen. Maybe that could work, but if you have another method, it would be nice to see it.
A:
thank you for the concise single-file example. this is a very helpful way to submit a kivy question. I have modified and tested the below app with various changes.
I changed the root.submit to app.submit. This is not strictly required, it is just a choice in this example to put the logic in the main app. it is also possible to use root.submit and put the logic in the widget but one would have to pass a reference to the screen manager into that widget in that case.
imported TextInput object instead of using ObjectProperty. when using an IDE it is helpful to declare objects with the specific type because it enables auto-complete
assigned the ScreenManager to self.sm so this object is available throughout the app.
finally, got rid of any reference to global. I think it is better to avoid use of this keyword and explicitly create the variable at the highest level where you need it and pass the value into the objects requiring it.
from kivy.app import App
from kivy.lang import Builder
from kivy.uix.screenmanager import ScreenManager, Screen
# from kivy.properties import ObjectProperty
from kivy.uix.textinput import TextInput
from kivy.properties import StringProperty
from kivy.clock import Clock
Builder.load_string("""
<MenuScreen>:
promptObject: prompts
BoxLayout:
orientation: 'horizontal'
TextInput:
id: prompts
pos: 20,20
Button:
text: "Enter Prompt"
pos: 30,30
size: 100, 30
on_press: app.submit()
<Newscreen>
BoxLayout:
orientation: 'vertical'
TextInput:
id: display_output
text: root.output
readonly: True
""")
class MenuScreen(Screen):
promptObject = TextInput()
class NewScreen(Screen):
output = StringProperty()
def __init__(self, **kw):
super().__init__(**kw)
def display(self, result):
# set the string property equal to the value you sent
self.output = result
print(result) # To test if it works
class TestApp(App):
def __init__(self, **kwargs):
super().__init__(**kwargs)
# create screen manager with self so that you have access
# anywhere inside the App
self.sm = ScreenManager()
# create the main screen
self.menu_screen = MenuScreen(name='menu')
# this could be deferred, or created at initialization
self.settings_screen = NewScreen(name='Settings')
#
def submit(self):
prompt = self.menu_screen.promptObject.text
result = prompt
# optional, deferred creation
# self.settings_screen = NewScreen(name='Settings')
# add to the screen manager
self.sm.add_widget(self.settings_screen)
# enter the value into your other screen
self.settings_screen.display(result)
# switch to this screen
self.sm.current="Settings"
def build(self) -> ScreenManager:
# could create this screen right away, depending...
# self.sm.add_widget(self.settings_screen)
# of course you need the main screen
self.sm.add_widget(self.menu_screen)
# redundant, unless you create all screens at the beginning
self.sm.current = 'menu'
return self.sm
if __name__ == '__main__':
TestApp().run()
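For reference, a hedged sketch of the root.submit alternative mentioned above: a Screen already has a manager property pointing at its ScreenManager, so the widget itself could switch screens (this assumes the Settings screen was added to the manager beforehand; the names come from the question).
class MenuScreen(Screen):
    def submit(self):
        # read the TextInput declared with id: prompts in the kv rule
        result = self.ids.prompts.text
        # hand the value to the other screen and switch to it
        settings = self.manager.get_screen('Settings')
        settings.display(result)
        self.manager.current = 'Settings'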
| How can I display the new value made by input for the next screen in Kivy | I have been trying to make this code work. I'm using ScreenManager to manage my screens.
I want the input I entered on the first screen to be displayed on the next screen. But instead, it just shows the initial value, and it doesn't change to the inputted value.
Here is the code I have done
from kivy.app import App
from kivy.lang import Builder
from kivy.uix.screenmanager import ScreenManager, Screen
from kivy.properties import ObjectProperty
from kivy.clock import Clock
Builder.load_string("""
<MenuScreen>:
promptObject: prompts
BoxLayout:
orientation: 'horizontal'
TextInput:
id: prompts
pos: 20,20
Button:
text: "Enter Prompt"
pos: 30,30
size: 100, 30
on_press: root.submit()
<Newscreen>
BoxLayout:
orientation: 'vertical'
TextInput:
id: display_output
text: root.output
readonly: True
""")
class MenuScreen(Screen):
promptObject = ObjectProperty()
prompt = ''
def submit(self):
prompt = self.promptObject.text
global result
result = prompt
sm.add_widget(NewScreen(name="Settings"))
sm.switch_to(sm.get_screen("Settings"))
NewScreen.display(self)
class NewScreen(Screen):
output = "testing testing"
def display(self):
self.output = result
print(result) #To test if it works
class TestApp(App):
def build(self):
global sm
sm = ScreenManager()
sm.add_widget(MenuScreen(name='menu'))
return sm
if __name__ == '__main__':
TestApp().run()
I'm also thinking whether I can instead declare the layout for the second screen later, right before I call for the next screen. Maybe that could work, but if you have another method, it would be nice to see it.
| [
"thank you for the concise single-file example. this is a very helpful way to submit a kivy question. I have modified and tested the below app with various changes.\nI changed the root.submit to app.submit. This is not strictly required, it is just a choice in this example to put the logic in the main app. it is also possible to use root.submit and put the logic in the widget but one would have to pass a reference to the screen manager into that widget in that case.\nimported TextInput object instead of using ObjectProperty. when using an IDE it is helpful to declare objects with the specific type because it enables auto-complete\nassigned the ScreenManager to self.sm so this object is available throughout the app.\nfinally, got rid of any reference to global. I think it is better to avoid use of this keyword and explicitly create the variable at the highest level where you need it and pass the value into the objects requiring it.\nfrom kivy.app import App\nfrom kivy.lang import Builder\nfrom kivy.uix.screenmanager import ScreenManager, Screen\n# from kivy.properties import ObjectProperty\nfrom kivy.uix.textinput import TextInput\nfrom kivy.properties import StringProperty\nfrom kivy.clock import Clock\n\nBuilder.load_string(\"\"\"\n<MenuScreen>:\n promptObject: prompts\n\n BoxLayout:\n orientation: 'horizontal'\n TextInput:\n id: prompts\n pos: 20,20\n Button:\n text: \"Enter Prompt\"\n pos: 30,30\n size: 100, 30\n on_press: app.submit()\n\n\n<Newscreen>\n BoxLayout:\n orientation: 'vertical'\n TextInput:\n id: display_output\n text: root.output\n readonly: True\n\n\"\"\")\n\n\nclass MenuScreen(Screen):\n promptObject = TextInput()\n\n\nclass NewScreen(Screen):\n output = StringProperty()\n\n def __init__(self, **kw):\n super().__init__(**kw)\n\n def display(self, result):\n # set the string property equal to the value you sent\n self.output = result\n print(result) # To test if it works\n\n\nclass TestApp(App):\n\n def __init__(self, **kwargs):\n super().__init__(**kwargs)\n # create screen manager with self so that you have access\n # anywhere inside the App\n self.sm = ScreenManager()\n # create the main screen\n self.menu_screen = MenuScreen(name='menu')\n\n # this could be deferred, or created at initialization\n self.settings_screen = NewScreen(name='Settings')\n\n #\n def submit(self):\n prompt = self.menu_screen.promptObject.text\n result = prompt\n\n # optional, deferred creation\n # self.settings_screen = NewScreen(name='Settings')\n\n # add to the screen manager\n self.sm.add_widget(self.settings_screen)\n # enter the value into your other screen\n self.settings_screen.display(result)\n\n # switch to this screen\n self.sm.current=\"Settings\"\n\n def build(self) -> ScreenManager:\n # could create this screen right away, depending...\n # self.sm.add_widget(self.settings_screen)\n # of course you need the main screen\n self.sm.add_widget(self.menu_screen)\n # redundant, unless you create all screens at the beginning\n self.sm.current = 'menu'\n return self.sm\n\n\nif __name__ == '__main__':\n TestApp().run()\n\n"
] | [
0
] | [] | [] | [
"kivy",
"kivy_language",
"python"
] | stackoverflow_0074675350_kivy_kivy_language_python.txt |
Q:
Within a Scrapy bot, I cannot increment a global variable (but can assign the same variable). Why?
I know that using global variables is not a good idea and I plan to do something different. But, while playing around I ran into a strange global variable issue within Scrapy. In pure python, I don't see this problem.
When I run this bot code:
import scrapy
from tutorial.items import DmozItem
class DmozSpider(scrapy.Spider):
name = "dmoz"
allowed_domains = ["lib-web.org"]
start_urls = [
"http://www.lib-web.org/united-states/public-libraries/michigan/"
]
count = 0
def parse(self, response):
for sel in response.xpath('//div/div/div/ul/li'):
item = DmozItem()
item['title'] = sel.xpath('a/text()').extract()
item['link'] = sel.xpath('a/@href').extract()
item['desc'] = sel.xpath('p/text()').extract()
global count;
count += 1
print count
yield item
DmozItem:
import scrapy
class DmozItem(scrapy.Item):
title = scrapy.Field()
link = scrapy.Field()
desc = scrapy.Field()
I get this error:
File "/Users/Admin/scpy_projs/tutorial/tutorial/spiders/dmoz_spider.py", line 22, in parse
count += 1
NameError: global name 'count' is not defined
But if I simply change 'count += 1' to just 'count = 1', it runs fine.
What's going on here? Why can I not increment the variable?
Again, if I run similar code outside of a Scrapy context, in pure Python, it runs fine. Here's the code:
count = 0
def doIt():
global count
for i in range(0, 10):
count +=1
doIt()
doIt()
print count
Resulting in:
Admin$ python count_test.py
20
A:
count is a class variable in your example, so you should access it using self.count. It solves the error, but maybe what you really need is an instance variable, because as a class variable, count is shared between all the instances of the class.
Assigning count = 1 inside the parse method works because it creates a new local variable called count, which is different from the class variable count.
Your pure Python example works because you did not define a class, but a function instead, and the variable count you created there has global scope, which is accessible from the function scope.
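For reference, a minimal sketch of that fix with the names from the question (only the relevant lines shown):
import scrapy

class DmozSpider(scrapy.Spider):
    name = "dmoz"
    count = 0  # class variable, shared by all instances of the spider

    def parse(self, response):
        for sel in response.xpath('//div/div/div/ul/li'):
            # ... build the DmozItem exactly as before ...
            # access the class variable through self instead of using global
            self.count += 1
            print(self.count)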
A:
You can use self.count because count is a class variable.
| Within a Scrapy bot, I cannot increment a global variable (but can assign the same variable). Why? | I know that using global variables is not a good idea and I plan to do something different. But, while playing around I ran into a strange global variable issue within Scrapy. In pure python, I don't see this problem.
When I run this bot code:
import scrapy
from tutorial.items import DmozItem
class DmozSpider(scrapy.Spider):
name = "dmoz"
allowed_domains = ["lib-web.org"]
start_urls = [
"http://www.lib-web.org/united-states/public-libraries/michigan/"
]
count = 0
def parse(self, response):
for sel in response.xpath('//div/div/div/ul/li'):
item = DmozItem()
item['title'] = sel.xpath('a/text()').extract()
item['link'] = sel.xpath('a/@href').extract()
item['desc'] = sel.xpath('p/text()').extract()
global count;
count += 1
print count
yield item
DmozItem:
import scrapy
class DmozItem(scrapy.Item):
title = scrapy.Field()
link = scrapy.Field()
desc = scrapy.Field()
I get this error:
File "/Users/Admin/scpy_projs/tutorial/tutorial/spiders/dmoz_spider.py", line 22, in parse
count += 1
NameError: global name 'count' is not defined
But if I simply change 'count += 1' to just 'count = 1', it runs fine.
What's going on here? Why can I not increment the variable?
Again, if I run similar code outside of a Scrapy context, in pure Python, it runs fine. Here's the code:
count = 0
def doIt():
global count
for i in range(0, 10):
count +=1
doIt()
doIt()
print count
Resulting in:
Admin$ python count_test.py
20
| [
"count is a class variable in your example, so you should access it using self.count. It solves the error, but maybe what you really need is an instance variable, because as a class variable, count is shared between all the instances of the class.\nAssigning count = 1 inside the parse method works because it creates a new local variable called count, which is different from the class variable count.\nYour pure Python example works because you did not define a class, but a function instead, and the variable count you created there has global scope, which is accessible from the function scope.\n",
"U can use self.count. because this is class varriable.\n"
] | [
4,
0
] | [] | [] | [
"python_2.7",
"scrapy"
] | stackoverflow_0033712868_python_2.7_scrapy.txt |
Q:
How to get input intensity of gamepad using old input manager?
I’m looking for a way to get the input intensity of a controller joystick when using an Xbox/PS5 controller in Unity using the old Input Manager. Anything would help, I can’t find any resources on this subject, thanks!
Currently, the only results I’m getting are -1,0,1 using the old input manager
A:
I think you already have your desired output. The method Input.GetAxis() returns a value between -1 and 1, representing the intensity of the joystick input. A value of -1 indicates that the joystick is fully pressed to the left, a value of 0 indicates that the joystick is not being pressed, and a value of 1 indicates that the joystick is fully pressed to the right.
To get the input intensity of a specific joystick, such as the left joystick on an Xbox controller, you can use the following code:
float leftJoystickIntensity = Input.GetAxis("LeftJoystickX");
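For reference, a hedged sketch of reading a whole stick (the axis names here are assumptions and have to match the entries configured under Edit > Project Settings > Input Manager):
// Both calls return a float between -1 and 1
float x = Input.GetAxis("Horizontal");
float y = Input.GetAxis("Vertical");

// Combined stick intensity, 0 when centred and 1 when fully pushed in any direction
float intensity = Mathf.Clamp01(new Vector2(x, y).magnitude);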
| How to get input intensity of gamepad using old input manager? | I’m looking for a way to get the input intensity of a controller joystick when using an Xbox/PS5 controller in Unity using the old Input Manager. Anything would help, I can’t find any resources on this subject, thanks!
Currently, the only results I’m getting are -1,0,1 using the old input manager
| [
"I think you already have your desired output. The method Input.GetAxis() returns a value between -1 and 1, representing the intensity of the joystick input. A value of -1 indicates that the joystick is fully pressed to the left, a value of 0 indicates that the joystick is not being pressed, and a value of 1 indicates that the joystick is fully pressed to the right.\nTo get the input intensity of a specific joystick, such as the left joystick on an Xbox controller, you can use the following code:\nfloat leftJoystickIntensity = Input.GetAxis(\"LeftJoystickX\");\n\n"
] | [
0
] | [] | [] | [
"unity3d",
"unityscript"
] | stackoverflow_0074635680_unity3d_unityscript.txt |
Q:
Webpack has been initialized using a configuration object that does not match the API schema error
I have a Next.js app and I got this error when I run 'npm run dev': "Invalid configuration object. Webpack has been initialized using a configuration object that does not match the API schema."
here is my next.config.js
`
module.exports = {
webpack: (config) => {
const newLocal = config.node = {
fs: 'empty'
};
return config
}
};
`
and here is my package.json
`
{
"name": "create-next-app-default",
"version": "0.1.0",
"description": "",
"main": "index.js",
"scripts": {
"dev": "next",
"build": "next build",
"start": "next start"
},
"keywords": [],
"author": "",
"license": "ISC",
"dependencies": {
"@chakra-ui/icons": "^2.0.13",
"@chakra-ui/react": "^2.4.2",
"@emotion/react": "^11.10.5",
"@emotion/styled": "^11.10.5",
"@solana/wallet-adapter-base": "^0.9.19",
"@solana/wallet-adapter-react": "^0.15.25",
"@solana/wallet-adapter-react-ui": "^0.9.23",
"@solana/wallet-adapter-wallets": "^0.19.7",
"@solana/web3.js": "^1.69.0",
"framer-motion": "^6.5.1",
"next": "^13.0.6",
"react": "^18.2.0",
"react-dom": "^18.2.0"
},
"devDependencies": {
"webpack": "^5.75.0",
"webpack-dev-server": "^4.11.1"
}
}
`
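For reference, a hedged sketch of the webpack 5 style equivalent, assuming the schema error comes from the node option (webpack 5 no longer accepts node: { fs: 'empty' } and uses resolve.fallback instead):
module.exports = {
  webpack: (config, { isServer }) => {
    if (!isServer) {
      // tell webpack 5 not to polyfill fs in the browser bundle
      config.resolve.fallback = { ...config.resolve.fallback, fs: false };
    }
    return config;
  },
};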
| Webpack has been initialized using a configuration object that does not match the API schema error | I have a Next.js app and I got this error when I run 'npm run dev': "Invalid configuration object. Webpack has been initialized using a configuration object that does not match the API schema."
here is my next.config.js
`
module.exports = {
webpack: (config) => {
const newLocal = config.node = {
fs: 'empty'
};
return config
}
};
`
and here is my package.json
`
{
"name": "create-next-app-default",
"version": "0.1.0",
"description": "",
"main": "index.js",
"scripts": {
"dev": "next",
"build": "next build",
"start": "next start"
},
"keywords": [],
"author": "",
"license": "ISC",
"dependencies": {
"@chakra-ui/icons": "^2.0.13",
"@chakra-ui/react": "^2.4.2",
"@emotion/react": "^11.10.5",
"@emotion/styled": "^11.10.5",
"@solana/wallet-adapter-base": "^0.9.19",
"@solana/wallet-adapter-react": "^0.15.25",
"@solana/wallet-adapter-react-ui": "^0.9.23",
"@solana/wallet-adapter-wallets": "^0.19.7",
"@solana/web3.js": "^1.69.0",
"framer-motion": "^6.5.1",
"next": "^13.0.6",
"react": "^18.2.0",
"react-dom": "^18.2.0"
},
"devDependencies": {
"webpack": "^5.75.0",
"webpack-dev-server": "^4.11.1"
}
}
`
| [] | [] | [
"It looks like there is a syntax error in your next.config.js file. The const newLocal = assignment statement is unnecessary and causes the error you are seeing.\nTo fix this error, you can remove the const newLocal = statement and update your code as follows:\nmodule.exports = {\n webpack: (config) => {\n config.node = {\n fs: 'empty'\n };\n return config\n }\n};\n\n"
] | [
-1
] | [
"javascript",
"next.js",
"reactjs",
"solana",
"webpack"
] | stackoverflow_0074677039_javascript_next.js_reactjs_solana_webpack.txt |
Q:
How to get the most recent message of a channel in discord.py?
Is there a way to get the most recent message of a specific channel using discord.py? I looked at the official docs and didn't find a way to.
A:
I've now figured it out by myself:
For a discord.Client class you just need these lines of code for the last message:
(await self.get_channel(CHANNEL_ID).history(limit=1).flatten())[0]
If you use a discord.ext.commands.Bot @thegamecracks' answer is correct.
A:
(Answer uses discord.ext.commands.Bot instead of discord.Client; I haven't worked with the lower level parts of the API, so this may not apply to discord.Client)
In this case, you can use Bot.get_channel(ID) to acquire the channel you want to inspect.
channel = self.bot.get_channel(int(ID))
Then, you can use channel.last_message_id to get the ID of the last message, and acquire the message with channel.fetch_message(ID).
message = await channel.fetch_message(
channel.last_message_id)
Combined, a command to get the last message of a channel may look like this:
@commands.command(
name='getlastmessage')
async def client_getlastmessage(self, ctx, ID):
"""Get the last message of a text channel."""
channel = self.bot.get_channel(int(ID))
if channel is None:
await ctx.send('Could not find that channel.')
return
# NOTE: get_channel can return a TextChannel, VoiceChannel,
# or CategoryChannel. You may want to add a check to make sure
# the ID is for text channels only
message = await channel.fetch_message(
channel.last_message_id)
# NOTE: channel.last_message_id could return None; needs a check
await ctx.send(
f'Last message in {channel.name} sent by {message.author.name}:\n'
+ message.content
)
# NOTE: message may need to be trimmed to fit within 2000 chars
A:
Another way
for the newer versions
messages = [message async for message in interaction.guild.get_channel(0).history(limit=12)]
messages[0].content
| How to get the most recent message of a channel in discord.py? | Is there a way to get the most recent message of a specific channel using discord.py? I looked at the official docs and didn't find a way to.
| [
"I've now figured it out by myself:\nFor a discord.Client class you just need these lines of code for the last message:\n\n(await self.get_channel(CHANNEL_ID).history(limit=1).flatten())[0]\n\n\nIf you use a discord.ext.commands.Bot @thegamecracks' answer is correct.\n",
"(Answer uses discord.ext.commands.Bot instead of discord.Client; I haven't worked with the lower level parts of the API, so this may not apply to discord.Client)\nIn this case, you can use Bot.get_channel(ID) to acquire the channel you want to inspect.\nchannel = self.bot.get_channel(int(ID))\n\nThen, you can use channel.last_message_id to get the ID of the last message, and acquire the message with channel.fetch_message(ID).\nmessage = await channel.fetch_message(\n channel.last_message_id)\n\nCombined, a command to get the last message of a channel may look like this:\[email protected](\n name='getlastmessage')\nasync def client_getlastmessage(self, ctx, ID):\n \"\"\"Get the last message of a text channel.\"\"\"\n channel = self.bot.get_channel(int(ID))\n if channel is None:\n await ctx.send('Could not find that channel.')\n return\n # NOTE: get_channel can return a TextChannel, VoiceChannel,\n # or CategoryChannel. You may want to add a check to make sure\n # the ID is for text channels only\n\n message = await channel.fetch_message(\n channel.last_message_id)\n # NOTE: channel.last_message_id could return None; needs a check\n\n await ctx.send(\n f'Last message in {channel.name} sent by {message.author.name}:\\n'\n + message.content\n )\n # NOTE: message may need to be trimmed to fit within 2000 chars\n\n",
"Another way\nfor the newer versions\nmessages = [message async for message in interaction.guild.get_channel(0).history(limit=12)]\nmessages[0].content\n\n"
] | [
9,
6,
0
] | [] | [] | [
"discord",
"discord.py",
"python"
] | stackoverflow_0064080277_discord_discord.py_python.txt |
Q:
ECS FARGATE TASK definition with docker hub image
I want to use the phpmyadmin public image from Docker Hub and configure an ECS Fargate task, but I'm not sure how to simply put the docker pull phpmyadmin command in the ECS task definition.
Is there an option to do it directly from the Docker Hub public repo, or should I build the image locally, push it to ECR, and use that image?
A:
Inside of your task definition you would need to add your container definitions.
For the image value you would need to set the public image name copied from Docker Hub.
There's no need to push to ECR for this as it is already a public image.
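For reference, a hedged sketch of what such a task definition could look like (the CPU/memory sizes and the port mapping are assumptions, not values from the question):
{
  "family": "phpmyadmin",
  "requiresCompatibilities": ["FARGATE"],
  "networkMode": "awsvpc",
  "cpu": "256",
  "memory": "512",
  "containerDefinitions": [
    {
      "name": "phpmyadmin",
      "image": "phpmyadmin:latest",
      "essential": true,
      "portMappings": [{ "containerPort": 80, "protocol": "tcp" }]
    }
  ]
}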
A:
got it. not required to push to ECR. simply FROM and image command worked.
A:
You can simply write:
docker.io/<dockerhub_username>/<dockerhub_repository>:tag
in Image field
For phpmyadmin it should be:
docker.io/phpmyadmin:latest
| ECS FARGATE TASK definition with docker hub image | I want to use the phpmyadmin public image from Docker Hub and configure an ECS Fargate task, but I'm not sure how to simply put the docker pull phpmyadmin command in the ECS task definition.
Is there an option to do it directly from the Docker Hub public repo, or should I build the image locally, push it to ECR, and use that image?
| [
"Inside of your task definition you would need to add your container definitions.\nFor the image value you would need to set the public image name copied from Docker Hub.\nThere's no need to push to ECR for this as it is already a public image. \n",
"got it. not required to push to ECR. simply FROM and image command worked.\n",
"You can simply write:\ndocker.io/<dockerhub_username>/<dockerhub_repository>:tag\n\nin Image field\nFor phpmyadmin it should be:\ndocker.io/phpmyadmin:latest\n\n"
] | [
5,
1,
0
] | [] | [] | [
"amazon_ecs",
"amazon_web_services",
"aws_fargate",
"docker",
"phpmyadmin"
] | stackoverflow_0062246531_amazon_ecs_amazon_web_services_aws_fargate_docker_phpmyadmin.txt |
Q:
Is there any way to install and unpack a github repository through code without using git bash and the like?
Currently I have a problem where I need to install all contents of a github repository (https://github.com/reversinglabs/reversinglabs-yara-rules) through code without using git bash or the like.
In this case I need to fully install the yara repository from said github.
Does anyone know a way to do it in C, C++, C#, or Python?
Unfortunately till now I have yet to succeed in any way.
A:
It's not clear what part of bash, etc, you do not want to use. A simple way otherwise is to just call git through std::system()
#include <cstdlib>
int main(int argc, char**argv) {
std::system("git clone ...");
}
I have used it in many cases where I need to integrate git commands in a c++ program.
A:
GitHub offers a zip download of all the code it hosts.
Use whatever language and library you like to do the equivalent of:
curl -o yara-rules.zip https://github.com/reversinglabs/reversinglabs-yara-rules/archive/refs/heads/develop.zip
unzip yara-rules.zip
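For reference, a rough Python equivalent of those two commands using only the standard library (the URL is copied from the curl example above):
import io
import zipfile
import urllib.request

URL = "https://github.com/reversinglabs/reversinglabs-yara-rules/archive/refs/heads/develop.zip"

# download the zip archive into memory and extract it next to the script
with urllib.request.urlopen(URL) as resp:
    archive = zipfile.ZipFile(io.BytesIO(resp.read()))
archive.extractall("reversinglabs-yara-rules")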
A:
only if you are on linux you can use:
#Python
import os
url = input("Url: ")
os.system("git clone " + url)
or in c++
#include <iostream>
using namespace std;
int main()
{
string inputUrl;
cin >> inputUrl;
inputUrl = "git clone " + inputUrl;
system (inputUrl.c_str()); //You need to convert it with .c_str()
}
Hope that this can be useful!
| Is there any way to install and unpack a github repository through code without using git bash and the like? | Currently I have a problem where I need to install all contents of a github repository (https://github.com/reversinglabs/reversinglabs-yara-rules) through code without using git bash or the like.
In this case I need to fully install the yara repository from said github.
Does anyone know a way to do it in C, C++, C#, or Python?
Unfortunately till now I have yet to succeed in any way.
| [
"It's not clear what part of bash, etc, you do not want to use. A simple way otherwise is to just call git through std::system()\n#include <cstdlib>\n\nint main(int argc, char**argv) {\n std::system(\"git clone ...\");\n}\n\nI have used it in many cases where I need to integrate git commands in a c++ program.\n",
"GitHub offers a zip download of all the code it hosts.\nUse whatever language and library you like to do the equivalent of:\ncurl -o yara-rules.zip https://github.com/reversinglabs/reversinglabs-yara-rules/archive/refs/heads/develop.zip \nunzip yara-rules.zip\n\n",
"only if you are on linux you can use:\n#Python \nimport os \nurl = input(\"Url: \")\nos.system(\"git clone \" + url)\n\nor in c++\n#include <iostream>\nusing namespace std;\nint main()\n{\n string inputUrl;\n cin >> inputUrl;\n inputUrl = \"git clone \" + inputUrl;\n system (inputUrl.c_str()); //You need to convert it with .c_str()\n}\n\nHope that this can be useful!\n"
] | [
1,
1,
0
] | [] | [] | [
"c#",
"c++",
"git",
"python"
] | stackoverflow_0074360270_c#_c++_git_python.txt |
Q:
'?' in where condition Laravel
I am trying to get the 'commentRating' models related to the 'Comment' model.
return $this->hasMany(commentRating::class,'comment_id','id')->toSql();
And get the following output:
"select * from `comment_ratings` where `comment_ratings`.`comment_id` = ? and `comment_ratings`.`comment_id` is not null"
Comment model is:
class Comment extends Model
{
use HasFactory;
protected $fillable = [
'body',
'user_id',
'item_id'
];
protected $casts = [
'user_id' => 'integer',
'item_id' => 'integer',
];
public function author()
{
return $this->belongsTo(User::class, 'user_id');
}
public function post()
{
return $this->belongsTo(Items::class, 'id');
}
public function ratings()
{
return $this->hasMany(commentRating::class,'comment_id','id')->toSql();
}
}
And the rating model:
<?php
namespace App\Models;
use Illuminate\Database\Eloquent\Factories\HasFactory;
use Illuminate\Database\Eloquent\Model;
class commentRating extends Model
{
use HasFactory;
protected $guarded = [];
protected $fillable = [
'comment_id',
'user_id'
];
public function user()
{
return $this->belongsTo('App\Models\User');
}
public function comment()
{
return $this->belongsTo('App\Models\Comment');
}
}
A:
This is because Laravel by default binds the variables to the query; you can use:
return $this->hasMany(commentRating::class,'comment_id','id')->dd();
This will return two values: the query with ?, and the array of values to bind.
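For reference, a hedged sketch of how to look at the query together with its bindings (this assumes $comment is a Comment instance and that ratings() returns the relation itself rather than toSql()):
$relation = $comment->ratings();
$sql      = $relation->toSql();          // the query with ? placeholders
$bindings = $relation->getBindings();    // the values Laravel will bind

// rough interpolation, for debugging only
$full = \Illuminate\Support\Str::replaceArray('?', $bindings, $sql);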
| '?' in where condition Laravel | I am trying to get the 'commentRating' models related to the 'Comment' model.
return $this->hasMany(commentRating::class,'comment_id','id')->toSql();
And get the following output:
"select * from `comment_ratings` where `comment_ratings`.`comment_id` = ? and `comment_ratings`.`comment_id` is not null"
Comment model is:
class Comment extends Model
{
use HasFactory;
protected $fillable = [
'body',
'user_id',
'item_id'
];
protected $casts = [
'user_id' => 'integer',
'item_id' => 'integer',
];
public function author()
{
return $this->belongsTo(User::class, 'user_id');
}
public function post()
{
return $this->belongsTo(Items::class, 'id');
}
public function ratings()
{
return $this->hasMany(commentRating::class,'comment_id','id')->toSql();
}
}
And the rating model:
<?php
namespace App\Models;
use Illuminate\Database\Eloquent\Factories\HasFactory;
use Illuminate\Database\Eloquent\Model;
class commentRating extends Model
{
use HasFactory;
protected $guarded = [];
protected $fillable = [
'comment_id',
'user_id'
];
public function user()
{
return $this->belongsTo('App\Models\User');
}
public function comment()
{
return $this->belongsTo('App\Models\Comment');
}
}
| [
"this is because laravel by default bind the variables to the query, you can use:\nreturn $this->hasMany(commentRating::class,'comment_id','id')->dd();\n\nthis will return two values, the query with ?, and the array of values to bind\n"
] | [
2
] | [] | [] | [
"laravel",
"sql"
] | stackoverflow_0074677131_laravel_sql.txt |
Q:
How to send state from Login component to RequireAuth component in react? I am using React router
I am using react router version 6.3.0. I am trying to make a protected route using RequireAuth. I am using email/password login and posting the data using axios and tanstack/react-query. What I am trying to do is send the isLoading and user state from the Login component to the RequireAuth component.
This is my App.js component
`
function App() {
return (
<div className="App">
<Routes>
<Route
path="/"
element={
<RequireAuth>
<Dashboard />
</RequireAuth>
}
>
<Route
path="products"
element={
<RequireAuth>
<Products />
</RequireAuth>
}
/>
<Route
path="products/create"
element={
<RequireAuth>
<CreateProduct />
</RequireAuth>
}
/>
</Route>
<Route path="/login" element={<Login />} />
</Routes>
<ToastContainer />
</div>
);
}
`
This is my Login component
`
import { useMutation } from "@tanstack/react-query";
import axios from "axios";
import React, { useEffect, useState } from "react";
import { useForm } from "react-hook-form";
import { useLocation, useNavigate } from "react-router-dom";
import useToken from "../../hooks/useToken";
import Loading from "../shared/Loading";
import RequireAuth from "./RequireAuth";
const Login = () => {
const [user, setUser] = useState(false);
const [error, setError] = useState("");
const navigate = useNavigate();
let location = useLocation();
let from = location.state?.from?.pathname || "/";
const { access_token, refresh_token } = useToken(user);
const {
register,
handleSubmit,
formState: { errors },
} = useForm();
let errorElement;
const userLogin = async (data) => {
const response = await axios.post(
"http://www.example.com/auth/login",
data
);
return response.data;
};
const { mutate, isLoading, isError } = useMutation(userLogin, {
onSuccess: (data) => {
setUser(data);
console.log("USER:", data);
},
onError: (error) => {
setError(error);
console.log("ERROR:", error);
},
});
if (isLoading) {
return <Loading />;
}
if (isError) {
errorElement = (
<p className=" px-1 pb-2">
<small className="text-red-500">Invalid user credentials</small>
</p>
);
}
const onSubmit = async (data) => {
const user = { ...data };
mutate(user);
};
return (
// JSX
// form is react hook form
);
};
export default Login;
`
RequireAuth component
`
import React from "react";
import { Navigate, useLocation } from "react-router-dom";
import Loading from "../Shared/Loading";
function RequireAuth({ children }) {
let location = useLocation();
// I am trying to use the isLoading state sent from Login component here
if (isLoading) {
return <Loading></Loading>;
}
// I am trying to use the user state sent from Login component here
if (!user) {
return <Navigate to="/login" state={{ from: location }} replace />;
}
return children;
}
export default RequireAuth;
`
A:
You can define isLoading at the App component level and pass it down to both components
const [isLoading, setIsLoading] = useState(false)
<Route
path="/"
element={
<RequireAuth isLoading={isLoading}>
<Dashboard />
</RequireAuth>
}
>
<Route path="/login" element={<Login setIsLoading={setIsLoading} />} />
Login
const Login = ({setIsLoading}) => {
useEffect(() => { setIsLoading(isLoading)}, [isLoading])
RequireAuth
function RequireAuth({ children,isLoading }) {
if (isLoading) {
return <Loading></Loading>;
}
So the same for user as well
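A minimal sketch of what that could look like (all names assumed from the question):
// App level: lift user next to isLoading
const [user, setUser] = useState(null);

<Route path="/login" element={<Login setIsLoading={setIsLoading} setUser={setUser} />} />

// RequireAuth: receive both values as props
function RequireAuth({ children, isLoading, user }) {
  if (isLoading) return <Loading />;
  if (!user) return <Navigate to="/login" replace />;
  return children;
}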
| How to send state from Login component to RequireAuth component in react? I am using React router | I am using react router version 6.3.0. I am trying to make a protected route using RequireAuth. I am using email/password login and posting the data using axios and tanstack/react-query. What I am trying to do is send the isLoading and user state from the Login component to the RequireAuth component.
This is my App.js component
`
function App() {
return (
<div className="App">
<Routes>
<Route
path="/"
element={
<RequireAuth>
<Dashboard />
</RequireAuth>
}
>
<Route
path="products"
element={
<RequireAuth>
<Products />
</RequireAuth>
}
/>
<Route
path="products/create"
element={
<RequireAuth>
<CreateProduct />
</RequireAuth>
}
/>
</Route>
<Route path="/login" element={<Login />} />
</Routes>
<ToastContainer />
</div>
);
}
`
This is my Login component
`
import { useMutation } from "@tanstack/react-query";
import axios from "axios";
import React, { useEffect, useState } from "react";
import { useForm } from "react-hook-form";
import { useLocation, useNavigate } from "react-router-dom";
import useToken from "../../hooks/useToken";
import Loading from "../shared/Loading";
import RequireAuth from "./RequireAuth";
const Login = () => {
const [user, setUser] = useState(false);
const [error, setError] = useState("");
const navigate = useNavigate();
let location = useLocation();
let from = location.state?.from?.pathname || "/";
const { access_token, refresh_token } = useToken(user);
const {
register,
handleSubmit,
formState: { errors },
} = useForm();
let errorElement;
const userLogin = async (data) => {
const response = await axios.post(
"http://www.example.com/auth/login",
data
);
return response.data;
};
const { mutate, isLoading, isError } = useMutation(userLogin, {
onSuccess: (data) => {
setUser(data);
console.log("USER:", data);
},
onError: (error) => {
setError(error);
console.log("ERROR:", error);
},
});
if (isLoading) {
return <Loading />;
}
if (isError) {
errorElement = (
<p className=" px-1 pb-2">
<small className="text-red-500">Invalid user credentials</small>
</p>
);
}
const onSubmit = async (data) => {
const user = { ...data };
mutate(user);
};
return (
// JSX
// form is react hook form
);
};
export default Login;
`
RequireAuth component
`
import React from "react";
import { Navigate, useLocation } from "react-router-dom";
import Loading from "../Shared/Loading";
function RequireAuth({ children }) {
let location = useLocation();
// I am trying to use the isLoading state sent from Login component here
if (isLoading) {
return <Loading></Loading>;
}
// I am trying to use the user state sent from Login component here
if (!user) {
return <Navigate to="/login" state={{ from: location }} replace />;
}
return children;
}
export default RequireAuth;
`
| [
"Can define the isLoading in App component level and pass it down to both components\nconst [isLoading, setIsLoading] = useState(false)\n <Route\n path=\"/\"\n element={\n <RequireAuth isLoading={isLoading}>\n <Dashboard />\n </RequireAuth>\n }\n >\n\n <Route path=\"/login\" element={<Login setIsLoading={setIsLoading} />} />\n\nLogin\n const Login = ({setIsLoading}) => {\n\n useEffect(() => { setIsLoading(isLoading)}, [isLoading])\n\nRequireAuth\nfunction RequireAuth({ children,isLoading }) {\n if (isLoading) {\n return <Loading></Loading>;\n }\n\nSo the same for user as well\n"
] | [
0
] | [] | [] | [
"javascript",
"react_router_dom",
"reactjs"
] | stackoverflow_0074677121_javascript_react_router_dom_reactjs.txt |
Q:
Disabled Window resize in Flutter Desktop
I'm building an app with Flutter (for desktop users) and it's kind of hard to make the app responsive for every screen size because of big forms and tables that shouldn't be used on small screens (less than 13').
Is there any way to prevent users from resizing the window?
A:
by using package called desktop_window
pub.dev
then inside main , use these 3 lines and it will work ✔ :
void main() async {
Size size = await DesktopWindow.getWindowSize();
// setting min and max with the same size to prevent resizing
await DesktopWindow.setMinWindowSize(Size(1920,1080));
await DesktopWindow.setMaxWindowSize(Size(1920,1080));
}
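For completeness, a fuller sketch of main() under the usual Flutter setup (MyApp is a placeholder for your own root widget; ensureInitialized() is generally needed before awaiting plugin calls in main, and runApp() still has to be called):
import 'package:flutter/material.dart';
import 'package:desktop_window/desktop_window.dart';

Future<void> main() async {
  // required before calling plugin methods from main()
  WidgetsFlutterBinding.ensureInitialized();
  const size = Size(1920, 1080);
  // same min and max size, so the window cannot be resized
  await DesktopWindow.setMinWindowSize(size);
  await DesktopWindow.setMaxWindowSize(size);
  runApp(const MyApp());
}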
| Disabled Window resize in Flutter Desktop | I'm building an app with Flutter (for desktop users) and it's kind of hard to make the app responsive for every screen size because of big forms and tables that shouldn't be used on small screens (less than 13').
Is there any way to prevent users from resizing the window?
| [
"by using package called desktop_window\npub.dev\nthen inside main , use these 3 lines and it will work ✔ :\nvoid main() async {\nSize size = await DesktopWindow.getWindowSize();\n// setting min and max with the same size to prevent resizing\nawait DesktopWindow.setMinWindowSize(Size(1920,1080));\nawait DesktopWindow.setMaxWindowSize(Size(1920,1080));\n}\n\n"
] | [
0
] | [] | [] | [
"dart",
"flutter"
] | stackoverflow_0072645161_dart_flutter.txt |
Q:
Why does fractional computed line height cause css height transition to shake text?
I observed a weird effect today while working on something. I was using CSS height transition to change my website's header height and observed the whole website's text was shaking.
Eventually I was able to pinpoint the cause of it and it was fractional value of computed line height. Following is the effect:
.hover {
height: 20px;
overflow: hidden;
transition: height 1s ease;
}
.hover:hover {
height: 100px;
}
p {
font-size: 15px;
line-height: 1.3;
/*Computed line height is 19.5 -- fraction*/
}
<div class="hover">
Hover over me<br>
foo bar<br>
foo bar<br>
foo bar<br>
</div>
<p class="shake">
I will shakeI will shakeI will shake <br>
I will shake I will shakeI will shake<br>
I will shake I will shakeI will shake<br>
I will shake I will shakeI will shake<br>
I will shake I will shakeI will shake<br>
</p>
Compare this to non-fractional computed line-height text:
.hover {
height: 20px;
overflow: hidden;
transition: height 1s ease;
}
.hover:hover {
height: 100px;
}
p {
font-size: 15px;
line-height: 1.2;
/*Computed line height is 18 -- non-fraction*/
}
<div class="hover">
Hover over me<br>
foo bar<br>
foo bar<br>
foo bar<br>
</div>
<p class="shake">
I will shakeI will shakeI will shake <br>
I will shake I will shakeI will shake<br>
I will shake I will shakeI will shake<br>
I will shake I will shakeI will shake<br>
I will shake I will shakeI will shake<br>
</p>
So, why does it happen? What are possible ways to fix the shaking while keeping the fractional computed line-height?
A:
The shaking effect is likely due to the fact that fractional line-heights can cause the heights of elements to be inconsistent. When the height of an element changes, the text inside of it may not be positioned exactly as it was before, which can cause the shaking effect you are seeing.
One way to fix this problem is to use a line-height whose computed pixel value is a whole number. This helps keep the text lines at consistent whole-pixel positions, which stops the shaking effect. You can do this by choosing a line-height that multiplies with the font size to a whole number of pixels, such as 1.2 with a 15px font (18px), rather than one that produces a fraction, like 1.3 (19.5px).
Another way to fix this problem is to use a CSS property called "box-sizing" to explicitly specify the dimensions of your elements. This will give you more precise control over the height of your elements and stop the shaking effect. You can use the box-sizing property like this:
.hover {
height: 20px;
overflow: hidden;
transition: height 1s ease;
box-sizing: border-box;
}
This will ensure that the height of your .hover element always stays at 20px, regardless of the computed line-height of the text inside of it. This will prevent the shaking effect you are seeing.
A:
As far as I tell it is a combination of factors.
Some shaking occurs for me for any line height, for both 1.2 and 1.3.
As far as I can tell it occurs due to rounding of fractional positioning for each render during the transition. A rounding of the fractional positioning will result from the height not being a non integer amount, because it is pushing down the text, and text line position being a non integer amount.
The height will be a non integer amount depending on the step size. 'ease' uses a default cubic-bezier function, which will result in a fractional height steps, also changing with time. This added to non integer line heights (x 1 for line 1, x 2 for line 2, etc.) would result in rendering position in the view port which will jump around depenending on the rounding resulting in floor or celing integers, which will almost certainly be different for eash transition step and for each line of text.
To my eye the best result was for an integer line-height, say 1.2, a height change from 20px to 120px, with a transition-timing-function of steps(100), that is each line of text is positionally rendered at an integer number of pixels.
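For reference, a minimal sketch of that suggestion (values assumed):
p {
  font-size: 15px;
  line-height: 1.2; /* computed line height 15 * 1.2 = 18px, a whole number */
}

.hover {
  height: 20px;
  overflow: hidden;
  transition: height 1s steps(100);
}

.hover:hover {
  height: 120px; /* 100px change over 100 steps = whole pixels per step */
}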
A:
As far as I tell it is a combination of factors.
Some shaking occurs for me for any line height, for both 1.2 and 1.3.
As far as I can tell it occurs due to rounding of fractional positioning for each render during the transition. A rounding of the fractional positioning will result from the height not being a non integer amount, because it is pushing down the text, and text line position being a non integer amount.
To my eye the best result was for an integer line-height, say 1.2, a height change from 20px to 120px, with a transition-timing-function of steps(100), that is each line of text is positionally rendered at an integer number of pixels.
| Why does fractional computed line height cause css height transition to shake text? | I observed a weird effect today while working on something. I was using CSS height transition to change my website's header height and observed the whole website's text was shaking.
Eventually I was able to pinpoint the cause of it and it was fractional value of computed line height. Following is the effect:
.hover {
height: 20px;
overflow: hidden;
transition: height 1s ease;
}
.hover:hover {
height: 100px;
}
p {
font-size: 15px;
line-height: 1.3;
/*Computed line height is 19.5 -- fraction*/
}
<div class="hover">
Hover over me<br>
foo bar<br>
foo bar<br>
foo bar<br>
</div>
<p class="shake">
I will shakeI will shakeI will shake <br>
I will shake I will shakeI will shake<br>
I will shake I will shakeI will shake<br>
I will shake I will shakeI will shake<br>
I will shake I will shakeI will shake<br>
</p>
Compare this to non-fractional computed line-height text:
.hover {
height: 20px;
overflow: hidden;
transition: height 1s ease;
}
.hover:hover {
height: 100px;
}
p {
font-size: 15px;
line-height: 1.2;
/*Computed line height is 18 -- non-fraction*/
}
<div class="hover">
Hover over me<br>
foo bar<br>
foo bar<br>
foo bar<br>
</div>
<p class="shake">
I will shakeI will shakeI will shake <br>
I will shake I will shakeI will shake<br>
I will shake I will shakeI will shake<br>
I will shake I will shakeI will shake<br>
I will shake I will shakeI will shake<br>
</p>
So, why does it happen? What are possible ways to fix the shaking while keeping the fractional computed line-height?
| [
"The shaking effect is likely due to the fact that fractional line-heights can cause the heights of elements to be inconsistent. When the height of an element changes, the text inside of it may not be positioned exactly as it was before, which can cause the shaking effect you are seeing.\nOne way to fix this problem is to set a consistent, non-fractional line-height for your text. This will make sure that your elements are always at the same height, which will stop the effect of shaking. You can do this by setting the line-height property to a whole number value, such as 1.0 or 1.2, rather than a fractional value like 1.3.\nAnother way to fix this problem is to use a CSS property called \"box-sizing\" to explicitly specify the dimensions of your elements. This will give you more precise control over the height of your elements and stop the shaking effect. You can use the box-sizing property like this:\n.hover {\nheight: 20px;\noverflow: hidden;\ntransition: height 1s ease;\nbox-sizing: border-box;\n}\n\nThis will ensure that the height of your .hover element always stays at 20px, regardless of the computed line-height of the text inside of it. This will prevent the shaking effect you are seeing.\n",
"As far as I tell it is a combination of factors.\nSome shaking occurs for me for any line height, for both 1.2 and 1.3.\nAs far as I can tell it occurs due to rounding of fractional positioning for each render during the transition. A rounding of the fractional positioning will result from the height not being a non integer amount, because it is pushing down the text, and text line position being a non integer amount.\nThe height will be a non integer amount depending on the step size. 'ease' uses a default cubic-bezier function, which will result in a fractional height steps, also changing with time. This added to non integer line heights (x 1 for line 1, x 2 for line 2, etc.) would result in rendering position in the view port which will jump around depenending on the rounding resulting in floor or celing integers, which will almost certainly be different for eash transition step and for each line of text.\nTo my eye the best result was for an integer line-height, say 1.2, a height change from 20px to 120px, with a transition-timing-function of steps(100), that is each line of text is positionally rendered at an integer number of pixels.\n",
"As far as I tell it is a combination of factors.\nSome shaking occurs for me for any line height, for both 1.2 and 1.3.\nAs far as I can tell it occurs due to rounding of fractional positioning for each render during the transition. A rounding of the fractional positioning will result from the height not being a non integer amount, because it is pushing down the text, and text line position being a non integer amount.\nTo my eye the best result was for an integer line-height, say 1.2, a height change from 20px to 120px, with a transition-timing-function of steps(100), that is each line of text is positionally rendered at an integer number of pixels.\n"
] | [
0,
0,
0
] | [] | [] | [
"css",
"css_transitions",
"html"
] | stackoverflow_0073783730_css_css_transitions_html.txt |
Q:
Prolog membership operation on a list which returns the index of the found element
I have recently started with Prolog and I want to write a predicate which checks whether a given object is in a list or not. If the object is in the list, the predicate should return the index of the element. If the element is not found, it should return 0.
It should work like this: find(3,[1,4,5,3,2,3],N). -> yes. N / 4
find(2,[1,3,4,5,6,7],N). -> yes. N / 0
But I have problems with counting up the index N, and maybe someone here can help. I've seen many different ways to traverse a list, but I wasn't able to understand how they work. I would be really happy if someone could provide a solution and explain how and why it works.
This is what I wrote so far:
find(X, [X|TAIL], N) :- N is 1, write(N).
find(X, [], N) :- N is 0, write(N).
find(X, [_|TAIL], N) :- find(X, TAIL, N + 1).
It is working for the basecases:
find(a, [a, b, c, d, e, f, g], N) -> yes. N / 1.
find(j, [a, b, c, d, e, f, g], N) -> yes. N / 0.
But as soon as recursion is needed it is not working anymore, and I don't understand what's going wrong.
For the recursive case it gives me this: find(b, [a, b, c, d, e, f, g], N) -> no.
I am thankful for every answer and every comment!
A:
Using a descriptive predicate name:
nth1_once_else_0(Elem, Lst, Nth1) :-
% Start at element 1
nth1_once_else_0_(Lst, Elem, 1, Nth1),
% Stop after finding 1 solution
!.
% Otherwise, succeed with 0
nth1_once_else_0(_Elem, _Lst, 0).
% Using Upto and Nth1 arguments, rather than unnecessary & slow recursion
nth1_once_else_0_([Elem|_], Elem, Nth1, Nth1).
nth1_once_else_0_([_|T], Elem, Upto, Nth1) :-
% Loop through the list elements
Upto1 is Upto + 1,
nth1_once_else_0_(T, Elem, Upto1, Nth1).
Results in swi-prolog:
?- nth1_once_else_0(c, [a, b, c, a, b, c], Nth1).
Nth1 = 3.
?- nth1_once_else_0(z, [a, b, c, a, b, c], Nth1).
Nth1 = 0.
?- nth1_once_else_0(Char, [a, b, c, a, b, c], Nth1).
Char = a,
Nth1 = 1.
?- nth1_once_else_0(Char, [a, b, c, a, b, c], 2).
Char = b.
?- nth1_once_else_0(b, [a, b, c, a, b, c], 3).
false.
A:
find_([], _, _, 0).
find_([X|_], X, Counter, Counter).
find_([_|T], X, Counter, Result) :-
succ(Counter, Counter2),
find_(T, X, Counter2, Result).
find(X, List, Index) :-
find_(List, X, 1, Index).
This uses a traditional helper predicate to start a counter at 1, and swap the order of X and the List so the list comes first and SWI Prolog can execute it more efficiently.
The helper increments the counter with succ/2, and either uses the current counter value as the index if X is the head of the list, or searches X in the tail of the list. When the list is empty, the index is 0.
This behaves like so:
?- find(b, [a,b,c,d,a,b,c], X).
X = 2 ;
X = 6 ;
X = 0
A:
I tried for a long time and I coded something that worked for me, and I thought it would be good to post my code in a separate answer.
The last update on my code that I gave was the following:
find(X, [], 0).
find(X, [X|_], 1).
find(X, [_|Xs], N) :- find(X, Xs, Rest), N is 1 + Rest.
This worked fine for queries like this: ?- find(a, [a, b, c, d], N) -> yes. N = 1. or ?- find(c, [a, b, c, d], N) -> yes. N = 3.
But it didn't work for the following form of queries: ?- find(d, [a, b, c], N) -> yes. N = 3. It is supposed to give me the following answer: ?- find(d, [a, b, c], N) -> yes. N = 0.
I figured out that the problem was that when the list was worked through, it kept counting until there was no element left in the list and then just added 0 to the existing counter. So N would always be the length of the given list. I then wrote some more rules that "store" the list from the beginning, and another rule for calculating the length of the list. Now, when the list has been worked through and there are 0 elements left, it sets N to the negative length of the original list. With that we always get N = 0 when the element is not found.
I try to illustrate the problem here again with an example query: ?- find(d, [a, b, c], N).
It first matches the rule in the third line, where it has to call find(d, [b,c], Rest).
N is now: 1 + Rest.
After the find(d, [b,c], Rest) call it calls find(d, [c], Rest).
N is now 1 + (1 + Rest).
After the find(d, [c], Rest) call it calls 'find(d, [], 0)'.
N is now 1 + (1 + (1 + Rest)). And Rest = 0, so we get:
N = 1 + (1 + (1 + 0)) = 3.
I hope that I explained the problem so that everyone can understand it. I solved it like that:
find(X, [], 0).
find(X, [X|_], 1).
find(X, [], -N, List) :- list_length(List, LENGTH), N is LENGTH, write(LENGTH).
find(X, [X|_], 1, List).
find(X, [Y|Xs], N, List) :- find(X, Xs, Rest, List), N is 1 + Rest.
find(X, [Y|Xs], N) :- find(X, Xs, Rest, [Y|Xs]), N is 1 + Rest.
list_length([], 0).
list_length([X|Xs], LENGTH) :- list_length(Xs, Rest), LENGTH is 1 + Rest.
I am basically doing the same as before but now I am carrying the starting list with me through the recursion. At the point where I reach the empty list I am calculating the length of the starting list and instead of returning 0 I return the negative length of the starting list. The calculation of N would then look like the following:
N = 1 + (1 + (1 + (-3))) = 0.
At the position of the -3 there was a 0 in the old version.
I hope that I explained this well. I am still a beginner and I am thankful to everyone who can give me tips or recommendations.
| Prolog membership operation on a list which returns the index of the found element | I have currently started with PROLOG and I want to write a predicate which checks if a given object is in this list or not. If the object is in the list the predicate should return the index of the element. If the element is not found it should return 0.
It should work like this: find(3,[1,4,5,3,2,3],N). -> yes. N / 4
find(2,[1,3,4,5,6,7],N). -> yes. N / 0
But I have problems with counting up the index N and maybe someone here can help. I've seen many different ways on how to traverse a list but I found so many different ways and I wasn't able to understand how they work. I would be really happy if someone can provide a solution and explain how it works and why.
This is what I wrote so far:
find(X, [X|TAIL], N) :- N is 1, write(N).
find(X, [], N) :- N is 0, write(N).
find(X, [_|TAIL], N) :- find(X, TAIL, N + 1).
It is working for the basecases:
find(a, [a, b, c, d, e, f, g], N) -> yes. N / 1.
find(j, [a, b, c, d, e, f, g], N) -> yes. N / 0.
But when it is starting with recursion It is not working anymore and I don't understand what's going wrong.
For the recursion case it gives me this: find(b, [a, b, c, d, e, f, g], N) -> no.
I am thankful for every answer and every comment!
| [
"Using a descriptive predicate name:\nnth1_once_else_0(Elem, Lst, Nth1) :-\n % Start at element 1\n nth1_once_else_0_(Lst, Elem, 1, Nth1),\n % Stop after finding 1 solution\n !.\n% Otherwise, succeed with 0\nnth1_once_else_0(_Elem, _Lst, 0).\n\n% Using Upto and Nth1 arguments, rather than unnecessary & slow recursion\nnth1_once_else_0_([Elem|_], Elem, Nth1, Nth1).\nnth1_once_else_0_([_|T], Elem, Upto, Nth1) :-\n % Loop through the list elements\n Upto1 is Upto + 1,\n nth1_once_else_0_(T, Elem, Upto1, Nth1).\n\nResults in swi-prolog:\n?- nth1_once_else_0(c, [a, b, c, a, b, c], Nth1).\nNth1 = 3.\n\n?- nth1_once_else_0(z, [a, b, c, a, b, c], Nth1).\nNth1 = 0.\n\n?- nth1_once_else_0(Char, [a, b, c, a, b, c], Nth1).\nChar = a,\nNth1 = 1.\n\n?- nth1_once_else_0(Char, [a, b, c, a, b, c], 2).\nChar = b.\n\n?- nth1_once_else_0(b, [a, b, c, a, b, c], 3).\nfalse.\n\n",
"find_([], _, _, 0).\nfind_([X|_], X, Counter, Counter).\nfind_([_|T], X, Counter, Result) :-\n succ(Counter, Counter2),\n find_(T, X, Counter2, Result).\n\nfind(X, List, Index) :-\n find_(List, X, 1, Index).\n\nThis uses a traditional helper predicate to start a counter at 1, and swap the order of X and the List so the list comes first and SWI Prolog can execute it more efficiently.\nThe helper increments the counter with succ/2, and either uses the current counter value as the index if X is the head of the list, or searches X in the tail of the list. When the list is empty, the index is 0.\nThis behaves like so:\n?- find(b, [a,b,c,d,a,b,c], X).\nX = 2 ;\nX = 6 ;\nX = 0\n\n",
"I tried for a long time and I coded something that worked for me and I thought that maybe it would be good to post my code in a seperate answer.\nThe last update on my code that I gave was the following:\nfind(X [], 0).\nfind(X, [X|_], 1).\nfind(X, [_|Xs], N) :- find(X, Xs, Rest), N is 1 + Rest.\n\nThis worked fine for queries like this ?- find(a, [a, b, c, d], N) -> yes. N = 1. or ?- find(c, [a, b, c, d], N) -> yes. N = 1.\nBut it didn't work for the following form of queries: ?- find(d, [a, b, c], N) -> yes. N = 3. It is supposed to give me the following answer: ?- find(d, [a, b, c], N) -> yes. N = 0.\nI figured out, that the problem was when the list was worked off, it counted so long until there is no element left in the list and then it just adds 0 to the existing counter. So N would always be the length of the given list. I then wrote some more rules that \"store\" the list from the beginning and I wrote another rule for calculating the length of the list. Now if the list is worked off and there are 0 elements in the list, it has to set N to the negative length of the list. With that we are always getting N = 0 when the list is empty.\nI try to illustrate the problem here again with an example query: ?- find(d, [a, b, c], N).\nIt first matches the rule in the third line, where it has to call find(d, [b,c], Rest).\nN is now: 1 + Rest.\nAfter the find(d, [b,c], Rest) call it calls find(d, [c], Rest).\nN is now 1 + (1 + Rest).\nAfter the find(d, [c], Rest) call it calls 'find(d, [], 0)'.\nN is now 1 + (1 + (1 + Rest)). And Rest = 0, so we get:\nN = 1 + (1 + (1 + 0)) = 3.\nI hope that I explained the problem so that everyone can understand it. I solved it like that:\nfind(X, [], 0).\nfind(X, [X|_], 1).\n\nfind(X, [], -N, List) :- list_length(List, LENGTH), N is LENGTH, write(LENGTH).\nfind(X, [X|_], 1, List).\nfind(X, [Y|Xs], N, List) :- find(X, Xs, Rest, List), N is 1 + Rest.\n\nfind(X, [Y|Xs], N) :- find(X, Xs, Rest, [Y|Xs]), N is 1 + Rest.\n\nlist_length([], 0).\nlist_length([X|Xs], LENGTH) :- list_length(Xs, Rest), LENGTH is 1 + Rest.\n\nI am basically doing the same as before but now I am carrying the starting list with me through the recursion. At the point where I reach the empty list I am calculating the length of the starting list and instead of returning 0 I return the negative length of the starting list. The calculation of N would then look like the following:\nN = 1 + (1 + (1 + (-3))) = 0.\nAt the position of the -3 there was a 0 in the old version.\nI hope that I explained this well. I am still a beginner and I am thankful to everyone who can give me tips or recommendations.\n"
] | [
1,
0,
0
] | [] | [] | [
"functional_programming",
"prolog"
] | stackoverflow_0074671829_functional_programming_prolog.txt |
Q:
CDK Pipelines with CodePipeline cannot deploy large CloudFormation templates
It seems there is a bug in AWS CodePipelines and/or CDK Pipelines specifically which means that the synthesised CloudFormation assets cannot be deployed in the Assets stage because of the following error:
Template format error: JSON not well-formed. (line 1135, column 4) (Service: AmazonCloudFormation; Status Code: 400; Error Code: ValidationError; Request ID: XXXXXXXX; Proxy: null)
But the CloudFormation can be deployed from the local machine using cdk deploy, bypassing the CodePipeline, which suggests the synthesised CloudFormation template is fine. It seems this happens when the CloudFormation template gets too big. It can sometimes be resolved by breaking the project up into multiple stacks. However, when you have an AppSync API, for example, it becomes impractical to break this up. Has anyone experienced this issue and found a workaround? There is a related GitHub issue but it appears to have gone quiet.
A:
A workaround for this problem seems to be to break up the AppSync stack so that the schema and main API resource are defined in one stack and all the resolvers are defined in one or more different stacks which import the API by id, which splits the CloudFormation templates so that CodePipeline can deploy them. This works but it seems hacky and presumably is not intended. Any better ideas would be welcome.
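A minimal sketch of that split, assuming a CDK v2 TypeScript app and the low-level CfnResolver construct from aws-cdk-lib/aws-appsync; the stack names, the data source name and the resolver fields are hypothetical, and mapping templates / runtime config are omitted:
import { Stack, StackProps } from 'aws-cdk-lib';
import * as appsync from 'aws-cdk-lib/aws-appsync';
import { Construct } from 'constructs';

// Hypothetical props: the API id is produced by the stack that owns the schema/API.
interface ResolverStackProps extends StackProps {
  apiId: string;
  dataSourceName: string;
}

// Resolver-only stack: keeps its CloudFormation template small enough for the pipeline.
export class ResolverStack extends Stack {
  constructor(scope: Construct, id: string, props: ResolverStackProps) {
    super(scope, id, props);

    // Attach a resolver to the imported API by id (mapping templates omitted for brevity).
    new appsync.CfnResolver(this, 'GetThingResolver', {
      apiId: props.apiId,
      typeName: 'Query',
      fieldName: 'getThing',
      dataSourceName: props.dataSourceName,
    });
  }
}
Splitting the resolvers over two or three such stacks inside the same Stage keeps each synthesised template smaller, which seems to be what CodePipeline needs here.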
| CDK Pipelines with CodePipeline cannot deploy large CloudFormation templates | It seems there is a bug in AWS CodePipelines and/or CDK Pipelines specifically which means that the synthesised CloudFormation assets cannot be deployed in the Assets stage because of the following error:
Template format error: JSON not well-formed. (line 1135, column 4) (Service: AmazonCloudFormation; Status Code: 400; Error Code: ValidationError; Request ID: XXXXXXXX; Proxy: null)
But the CloudFormation can be deployed from the local machine using CDK Deploy, bypassing the CodePipeline which suggest the synthesised CloudFormation template is fine. It seems this happens when the CloudFormation template gets too big. It can sometimes be resolved by breaking up the project into multiple stacks. However when you have an AppSync API for example it becomes impractical to break this up. Has anyone experience this issue and found a work around? There is a related github issue but it appears to have gone quiet.
| [
"A workaround for this problem seems to be to break up the AppSync stack so that the schema and main API resource are defined in one stack and all the resolvers are defined in one or more different stacks which import the API by id, which splits the CloudFormation templates so that CodePipeline can deploy them. This works but it seems hacky and presumably is not intended. Any better ideas would be welcome.\n"
] | [
0
] | [] | [] | [
"amazon_cloudformation",
"amazon_web_services",
"aws_cdk",
"aws_codepipeline"
] | stackoverflow_0074662154_amazon_cloudformation_amazon_web_services_aws_cdk_aws_codepipeline.txt |
Q:
Is it possible to define a string in Flutter *.arb file that should not be translatable?
In the Android strings.xml file this can be done using translatable attribute:
<string name="inches" translatable="false">in</string>
I could not find any solution for this for the Flutter localization *.arb files.
A:
Yes, it is possible to define a string in a Flutter *.arb file that should not be translatable. To do this, you can add the @ character at the beginning of the string, like this:
{
"inches": "@in"
}
In this example, the inches string will be defined as in in the *.arb file, and this value will not be translated in other languages.
The @ character is used to mark a string as not translatable in the *.arb file. This is similar to the translatable="false" attribute in Android's strings.xml file that you mentioned.
| Is it possible to define a string in Flutter *.arb file that should not be translatable? | In the Android strings.xml file this can be done using translatable attribute:
<string name="inches" translatable="false">in</string>
I could not find any solution for this for the Flutter localization *.arb files.
| [
"Yes, it is possible to define a string in a Flutter *.arb file that should not be translatable. To do this, you can add the @ character at the beginning of the string, like this:\n{\n \"inches\": \"@in\"\n}\n\nIn this example, the inches string will be defined as in in the *.arb file, and this value will not be translated in other languages.\nThe @ character is used to mark a string as not translatable in the *.arb file. This is similar to the translatable=\"false\" attribute in Android's strings.xml file that you mentioned.\n"
] | [
1
] | [] | [] | [
"flutter",
"flutter_localizations"
] | stackoverflow_0074675736_flutter_flutter_localizations.txt |
Q:
How to create a GitHub Desktop repository with more folders each with its project
When following steps:
GitHub Desktop/File/Add Local Repository/XYFolderWithXcodeproj
And following the steps as per the video "Github Desktop Tutorial on a Mac" (starting from 2:32), I could create the repository when the XYFolderWithXcodeproj folder contained only one project.
However, when the XYFolderWithXcodeproj folder contains two folders, each with its own project, GitHub Desktop initiates the repository that was already created months ago, with a directory path only up to the Documents folder.
(I did remove it; still, it was not possible to remove it even with the option "Also move this repository to Trash" selected.
When trying to do this, there was a message: "Failed to move the repository directly to Trash. A common reason for this is that the directory or one of its files is open in another program.")
I would like to ask for advice on what steps to follow to create a GitHub Desktop repository with multiple folders, each with its own project.
A:
Failed to move the repository directly to Trash.
A common reason for this is that the directory or one of its files
is open in another program
Then try and close GitHub Desktop, then, from a file Explorer, delete the local repository you do not need, and re-open GitHub Desktop.
| How to create a GitHub Desktop repository with more folders each with its project | When following steps:
GitHub Desktop/File/Add Local Repository/XYFolderWithXcodeproj
And the following steps via as per video "Github Desktop Tutorial on a Mac" starting from (2:32), I could create repository when the XYFolderWithXcodeproj folder did contain only one project.
However, in case the XYFolderWithXcodeproj folder does contain two folders each with its projects, the GitHub Desktop does initiate the repository that was already created months ago and with directory path only up to Documents folder.
(I did remove it, still is was not possible to remove it with having selected the option "Also move this repository to Trash".
When trying to do this, there was message: "Failed to move the repository directly to Trash. A common reason for this is that the directory or one of its files is open in another program.)
Would like to ask for advice what steps to follow to create a GitHub Desktop repository with more folders each with its project.
| [
"\nFailed to move the repository directly to Trash. \nA common reason for this is that the directory or one of its files\nis open in another program\n\n\nThen try and close GitHub Desktop, then, from a file Explorer, delete the local repository you do not need, and re-open GitHub Desktop.\n"
] | [
0
] | [] | [] | [
"directory",
"github",
"github_desktop",
"xcode"
] | stackoverflow_0074677034_directory_github_github_desktop_xcode.txt |
Q:
WP_Query: Search by Taxonomy Name
Problem: WP_Query search string needs to look into the taxonomy name.
By default, $args['s'] only searches inside the title, content, and excerpt, but I need to search inside a custom taxonomy name too. I tried with tax_query, but it doesn't have the name__like or LIKE operator for tax_query.
So, now I'm trying to achieve it with a custom query; here's my code:
$keyword = "Monster";
$args['s'] = $keyword;
add_filter('posts_join', 'filter_post_join', 10, 1);
add_filter('posts_where', 'filter_post_where', 10, 1);
$posts = new WP_Query($args);
function filter_post_join( $join ) {
global $wpdb;
$join .= " LEFT JOIN $wpdb->term_relationships as txr ON ( {$wpdb->posts}.ID = txr.object_id )";
$join .= " LEFT JOIN $wpdb->terms as trms on ( txr.term_taxonomy_id = trms.term_id )";
return $join;
}
function filter_post_where( $where ) {
global $keyword;
$where .= " AND trms.name LIKE $keyword";
return $where;
}
Can anyone please tell me what I am doing wrong here?
I found a solution that kind of works: https://gist.github.com/markoman78/e22bbebe0d2305f294eb554d5a39d8c3
But it has a problem: it doesn't filter by post_type; if I want to fetch posts from the 'x' post type, it will also fetch posts from other post types that have matching tag names.
A:
To search inside a custom taxonomy name, you can use the tax_query parameter of WP_Query. The tax_query parameter allows you to specify which taxonomies and terms to include or exclude from the query results. You can use the 'name' and 'field' parameters to specify which taxonomy name to search for and which field to search in.
Here's an example of how you can use the tax_query parameter to search for a keyword in the 'name' field of a custom taxonomy:
$keyword = "Monster";
$args = array(
    's'         => $keyword,
    'tax_query' => array(
        array(
            'taxonomy' => 'custom_taxonomy',
            'field'    => 'name',
            'terms'    => $keyword
        )
    )
);

$posts = new WP_Query($args);
This will search for the keyword 'Monster' in the 'name' field of the 'custom_taxonomy' taxonomy and return the results that match.
Alternatively, you can use the 'name__like' operator in the tax_query parameter to search for keywords that are similar to the provided keyword. For example:
$keyword = "Monster";
$args = array(
    's'         => $keyword,
    'tax_query' => array(
        array(
            'taxonomy'   => 'custom_taxonomy',
            'field'      => 'name',
            'name__like' => $keyword
        )
    )
);

$posts = new WP_Query($args);
This will search for keywords similar to 'Monster' in the 'name' field of the 'custom_taxonomy' taxonomy and return the results that match.
You can also specify the post type you want to search in using the 'post_type' parameter in the $args array. For example:
$keyword = "Monster";
$args = array(
    's'         => $keyword,
    'post_type' => 'x',
    'tax_query' => array(
        array(
            'taxonomy'   => 'custom_taxonomy',
            'field'      => 'name',
            'name__like' => $keyword
        )
    )
);

$posts = new WP_Query($args);
This will search for keywords similar to 'Monster' in the 'name' field of the 'custom_taxonomy' taxonomy and return the results that match the 'x' post type.
| WP_Query: Search by Taxonomy Name | Problem: WP_Query search string needs to look into the taxonomy name.
By default, $args['s'] only searches inside the title, content, and excerpt, but I need to search inside a custom taxonomy name too. I tried with tax_query, but it doesn't have the name__like or LIKE operator for tax_query.
So, Now I'm trying to achieve it with the custom query; here's my code:
$keyword = "Monster";
$args['s'] = $keyword;
add_filter('posts_join', 'filter_post_join', 10, 1);
add_filter('posts_where', 'filter_post_where', 10, 1);
$posts = new WP_Query($args);
function filter_post_join( $join ) {
global $wpdb;
$join .= " LEFT JOIN $wpdb->term_relationships as txr ON ( {$wpdb->posts}.ID = txr.object_id )";
$join .= " LEFT JOIN $wpdb->terms as trms on ( txr.term_taxonomy_id = trms.term_id )";
return $join;
}
function filter_post_where( $where ) {
global $keyword;
$where .= " AND trms.name LIKE $keyword";
return $where;
}
Can anyone please tell me what I am doing wrong here?
I found a solution that kind of works: https://gist.github.com/markoman78/e22bbebe0d2305f294eb554d5a39d8c3
But it has a problem, it doesn't filter post_type; if I want to fetch posts from 'x' post type it will also fetch posts from other posts type that has matching tags name
| [
"To search inside a custom taxonomy name, you can use the tax_query parameter of WP_Query. The tax_query parameter allows you to specify which taxonomies and terms to include or exclude from the query results. You can use the 'name' and 'field' parameters to specify which taxonomy name to search for and which field to search in.\nHere's an example of how you can use the tax_query parameter to search for a keyword in the 'name' field of a custom taxonomy:\n$keyword = \"Monster\";\n$args = array(\n's' => $keyword,\n'tax_query' => array(\narray(\n'taxonomy' => 'custom_taxonomy',\n'field' => 'name',\n'terms' => $keyword\n)\n)\n);\n\n$posts = new WP_Query($args);\n\nThis will search for the keyword 'Monster' in the 'name' field of the 'custom_taxonomy' taxonomy and return the results that match.\nAlternatively, you can use the 'name__like' operator in the tax_query parameter to search for keywords that are similar to the provided keyword. For example:\n$keyword = \"Monster\";\n$args = array(\n's' => $keyword,\n'tax_query' => array(\narray(\n'taxonomy' => 'custom_taxonomy',\n'field' => 'name',\n'name__like' => $keyword\n)\n)\n);\n\n$posts = new WP_Query($args);\n\nThis will search for keywords similar to 'Monster' in the 'name' field of the 'custom_taxonomy' taxonomy and return the results that match.\nYou can also specify the post type you want to search in using the 'post_type' parameter in the $args array. For example:\n$keyword = \"Monster\";\n$args = array(\n's' => $keyword,\n'post_type' => 'x',\n'tax_query' => array(\narray(\n'taxonomy' => 'custom_taxonomy',\n'field' => 'name',\n'name__like' => $keyword\n)\n)\n);\n\n$posts = new WP_Query($args);\n\nThis will search for keywords similar to 'Monster' in the 'name' field of the 'custom_taxonomy' taxonomy and return the results that match the 'x' post type.\n"
] | [
0
] | [] | [] | [
"database",
"mysql",
"sql",
"wordpress"
] | stackoverflow_0074556625_database_mysql_sql_wordpress.txt |
Q:
What is the least effort way to hard code a large amount of fixed data into source code of an embedded C program?
We have a recurring use case:
we have a large amount of fixed, unchanging data that needs to be used in a bare metal (no OS) program for verification of the silicon we are taping out.
because it is bare metal, we have no file system
We are just doing #defines right now, manually entering the data into the source code
there is a lot of data
this is verification, so there are no code style concerns. We just need the least (human programming) effort way to get the data into a binary so that the binary can put it into DRAM during an automated test run
Does anyone have any ideas?
A:
Use a table. A static global table that you generate with a program at build time. That's one way. Probably this will give you more ideas on how to proceed.
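For example, purely as a sketch with hypothetical names and a made-up DRAM base address, the build can run a small generator (even xxd -i on the raw data file, plus a little renaming) that emits one const array, and the bare-metal program then copies it into DRAM:
/* test_data.c -- generated at build time (e.g. by `xxd -i stimulus.bin`), do not edit by hand */
#include <stdint.h>
#include <stddef.h>

const uint8_t test_stimulus[] = {
    0xDE, 0xAD, 0xBE, 0xEF,
    0x01, 0x02, 0x03, 0x04,
    /* ...rest of the data emitted by the generator... */
};
const size_t test_stimulus_len = sizeof(test_stimulus);

/* main.c -- copy the linked-in table into DRAM before the automated checks run */
extern const uint8_t test_stimulus[];
extern const size_t test_stimulus_len;

void load_dram(void)
{
    volatile uint8_t *dram = (volatile uint8_t *)0x80000000u; /* hypothetical DRAM base */
    for (size_t i = 0; i < test_stimulus_len; ++i) {
        dram[i] = test_stimulus[i];
    }
}
The same approach scales to a small Python script in the Makefile if the data needs more structure (several arrays, a table of address/value pairs, and so on).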
| What is the least effort way to hard code a large amount of fixed data into source code of an embedded C program? | We have a recurring use case:
we have a large amount of fixed, unchanging data that needs to be used in a bare metal (no OS) program for verification of the silicon we are taping out.
because it is bare metal, we have no file system
We are just doing #defines right now, manually entering the data into the source code
there is a lot of data
this is verification, so there are no code style concerns. We just need the least (human programming) effort way to get the data into a binary so that the binary can put it into DRAM during an automated test run
Does anyone have any ideas?
| [
"Use a table. A static global table that you generate by a program at build time. That’s one way. Probably this will give you more ideas on how to proceed.\n"
] | [
0
] | [] | [] | [
"c",
"testing"
] | stackoverflow_0074673733_c_testing.txt |
Q:
Flutter: possible to detect when a drawer is open?
Is it possible to detect when a Drawer is open so that we can run some routine to update its content?
A typical use case I have would be to display the number of followers, likers... and for this, I would need to poll the server to get this information, then to display it.
I tried to implement a NavigatorObserver to catch the moment when the Drawer is made visible/hidden but the NavigatorObserver does not detect anything about the Drawer.
Here is the code linked to the NavigatorObserver:
import 'package:flutter/material.dart';
typedef void OnObservation(Route<dynamic> route, Route<dynamic> previousRoute);
typedef void OnStartGesture();
class NavigationObserver extends NavigatorObserver {
OnObservation onPushed;
OnObservation onPopped;
OnObservation onRemoved;
OnObservation onReplaced;
OnStartGesture onStartGesture;
@override
void didPush(Route<dynamic> route, Route<dynamic> previousRoute) {
if (onPushed != null) {
onPushed(route, previousRoute);
}
}
@override
void didPop(Route<dynamic> route, Route<dynamic> previousRoute) {
if (onPopped != null) {
onPopped(route, previousRoute);
}
}
@override
void didRemove(Route<dynamic> route, Route<dynamic> previousRoute) {
if (onRemoved != null)
onRemoved(route, previousRoute);
}
@override
void didReplace({ Route<dynamic> oldRoute, Route<dynamic> newRoute }) {
if (onReplaced != null)
onReplaced(newRoute, oldRoute);
}
@override
void didStartUserGesture() {
if (onStartGesture != null){
onStartGesture();
}
}
}
and the initialization of this observer
void main(){
runApp(new MyApp());
}
class MyApp extends StatefulWidget {
@override
_MyAppState createState() => new _MyAppState();
}
class _MyAppState extends State<MyApp> {
final NavigationObserver _observer = new NavigationObserver()
..onPushed = (Route<dynamic> route, Route<dynamic> previousRoute) {
print('** pushed route: $route');
}
..onPopped = (Route<dynamic> route, Route<dynamic> previousRoute) {
print('** poped route: $route');
}
..onReplaced = (Route<dynamic> route, Route<dynamic> previousRoute) {
print('** replaced route: $route');
}
..onStartGesture = () {
print('** on start gesture');
};
@override
void initState(){
super.initState();
}
// This widget is the root of your application.
@override
Widget build(BuildContext context) {
return new MaterialApp(
title: 'Title',
theme: new ThemeData(
primarySwatch: Colors.blue,
),
home: new SplashScreen(),
routes: <String, WidgetBuilder> {
'/splashscreen': (BuildContext context) => new SplashScreen(),
},
navigatorObservers: <NavigationObserver>[_observer],
);
}
}
Thanks for your help.
A:
This answer is old now. Please see @dees91's answer.
Detecting & Running Functions When Drawer Is Opened / Closed
initState() runs when the drawer is opened by any action.
dispose() runs when the drawer is closed by any action.
class MyDrawer extends StatefulWidget {
@override
_MyDrawerState createState() => _MyDrawerState();
}
class _MyDrawerState extends State<MyDrawer> {
@override
void initState() {
super.initState();
print("open");
}
@override
void dispose() {
print("close");
super.dispose();
}
@override
Widget build(BuildContext context) {
return Drawer(
child: Column(
children: <Widget>[
Text("test1"),
Text("test2"),
Text("test3"),
],
),
);
}
}
State Management Considerations
If you are altering state with these functions to rebuild drawer items, you may encounter the error: Unhandled Exception: setState() or markNeedsBuild() called during build.
This can be handled by using one of the following two options in initState() (source):
Option 1
WidgetsBinding.instance.addPostFrameCallback((_){
// Add Your Code here.
});
Option 2
SchedulerBinding.instance.addPostFrameCallback((_) {
// add your code here.
});
Full Example of Option 1
@override
void initState() {
super.initState();
WidgetsBinding.instance.addPostFrameCallback((_) {
// Your Code Here
});
}
A:
As https://github.com/flutter/flutter/pull/67249 is already merged and published with Flutter 2.0 here is proper way to detect drawer open/close:
Scaffold(
onDrawerChanged: (isOpened) {
//todo what you need for left drawer
},
onEndDrawerChanged: (isOpened) {
//todo what you need for right drawer
},
)
A:
Best solution
ScaffoldState has a useful isDrawerOpen property which tells you whether the drawer is currently open.
Example: Here, on back press, it first checks whether the drawer is open; if it is, the drawer is closed first before exiting.
/// create a key for the scaffold in order to access it later.
GlobalKey<ScaffoldState> _scaffoldKey = GlobalKey<ScaffoldState>();
@override
Widget build(context) {
  return WillPopScope(
    // onWillPop belongs to WillPopScope, not to Scaffold
    onWillPop: () async {
      // drawer is open, so close it first
      if (_scaffoldKey.currentState.isDrawerOpen) {
        Navigator.of(context).pop();
        return false;
      }
      // we can now close the app.
      return true;
    },
    child: Scaffold(
      // assign key (important)
      key: _scaffoldKey,
      drawer: SideNavigation(),
    ),
  );
}
A:
I think one simple solution is to override the leading property of your AppBar, so you know when the menu icon is pressed and can run your API calls based on that.
Yet I may have misunderstood your question, because with the use case you provided you usually need to manage it in a way that lets you listen to any change, which will update the value automatically, so I am not sure what you are trying to trigger when the drawer is open.
Anyway here is the example.
class DrawerExample extends StatefulWidget {
@override
_DrawerExampleState createState() => new _DrawerExampleState();
}
class _DrawerExampleState extends State<DrawerExample> {
GlobalKey<ScaffoldState> _key = new GlobalKey<ScaffoldState>();
int _counter =0;
_handleDrawer(){
_key.currentState.openDrawer();
setState(() {
///DO MY API CALLS
_counter++;
});
}
@override
Widget build(BuildContext context) {
return new Scaffold(
key: _key,
appBar: new AppBar(
title: new Text("Drawer Example"),
centerTitle: true,
leading: new IconButton(icon: new Icon(
Icons.menu
),onPressed:_handleDrawer,),
),
drawer: new Drawer(
child: new Center(
child: new Text(_counter.toString(),style: Theme.of(context).textTheme.display1,),
),
),
);
}
}
A:
You can simply use onDrawerChanged on the Scaffold widget to detect whether the drawer is opened or closed.
Property:
{void Function(bool)? onDrawerChanged}
Type: void Function(bool)?
Optional callback that is called when the Scaffold.drawer is opened or closed.
Example :
@override
Widget build(BuildContext context) {
  return Scaffold(
    onDrawerChanged: (val) {
      if (val) {
        setState(() {
          // drawer was opened
        });
      } else {
        setState(() {
          // drawer was closed
        });
      }
    },
    drawer: Drawer(
      child: Container(),
    ),
  );
}
A:
Unfortunately, at the moment there is no readymade solution.
You can use a dirty hack for this: observe the visible position of the Drawer.
For example, I used this approach to synchronise the animation of the icon on the button and the location of the Drawer box.
The code that solves this problem you can see below:
import 'package:flutter/material.dart';
import 'package:flutter/scheduler.dart';
class DrawerListener extends StatefulWidget {
final Widget child;
final ValueChanged<FractionalOffset> onPositionChange;
DrawerListener({
@required this.child,
this.onPositionChange,
});
@override
_DrawerListenerState createState() => _DrawerListenerState();
}
class _DrawerListenerState extends State<DrawerListener> {
GlobalKey _drawerKey = GlobalKey();
int taskID;
Offset currentOffset;
@override
void initState() {
super.initState();
_postTask();
}
_postTask() {
taskID = SchedulerBinding.instance.scheduleFrameCallback((_) {
if (widget.onPositionChange != null) {
final RenderBox box = _drawerKey.currentContext?.findRenderObject();
if (box != null) {
Offset newOffset = box.globalToLocal(Offset.zero);
if (newOffset != currentOffset) {
currentOffset = newOffset;
widget.onPositionChange(
FractionalOffset.fromOffsetAndRect(
currentOffset,
Rect.fromLTRB(0, 0, box.size.width, box.size.height),
),
);
}
}
}
_postTask();
});
}
@override
void dispose() {
SchedulerBinding.instance.cancelFrameCallbackWithId(taskID);
if (widget.onPositionChange != null) {
widget.onPositionChange(FractionalOffset(1.0, 0));
}
super.dispose();
}
@override
Widget build(BuildContext context) {
return Container(
key: _drawerKey,
child: widget.child,
);
}
}
If you are only interested in the final events of opening or closing the box, it is enough to call the callbacks in initState and dispose functions.
A:
There is an isDrawerOpen property in ScaffoldState, so you can check it whenever you want.
Create a global key:
GlobalKey<ScaffoldState> scaffoldKey = GlobalKey<ScaffoldState>();
Assign it to the Scaffold:
Scaffold(
  key: scaffoldKey,
  appBar: ..)
Check it wherever you need in the app:
bool opened = scaffoldKey.currentState.isDrawerOpen;
A:
By the time this question was posted it was a bit tricky to accomplish this. But from Flutter 2.0, it is pretty easy. Inside your Scaffold you can detect both the right drawer and the left drawer as follows.
@override
Widget build(BuildContext context) {
return Scaffold(
onDrawerChanged: (isOpened) {
        // Left drawer: your code here
},
onEndDrawerChanged: (isOpened) {
        // Right drawer: your code here
},
);
}
A:
You can use Scaffold.of(context) as below to detect the Drawer status:
NOTE: you must put your code in a Builder widget so that the context you use contains the Scaffold.
Builder(
builder: (context) => IconButton(
icon: Icon(
Icons.menu,
color: getColor(context, opacity.value),
),
onPressed: () {
if (Scaffold.of(context).isDrawerOpen) {
Scaffold.of(context).closeDrawer();
} else {
Scaffold.of(context).openDrawer();
}
},
),
),
| Flutter: possible to detect when a drawer is open? | Is it possible to detect when a Drawer is open so that we can run some routine to update its content?
A typical use case I have would be to display the number of followers, likers... and for this, I would need to poll the server to get this information, then to display it.
I tried to implement a NavigatorObserver to catch the moment when the Drawer is made visible/hidden but the NavigatorObserver does not detect anything about the Drawer.
Here is the code linked to the NavigatorObserver:
import 'package:flutter/material.dart';
typedef void OnObservation(Route<dynamic> route, Route<dynamic> previousRoute);
typedef void OnStartGesture();
class NavigationObserver extends NavigatorObserver {
OnObservation onPushed;
OnObservation onPopped;
OnObservation onRemoved;
OnObservation onReplaced;
OnStartGesture onStartGesture;
@override
void didPush(Route<dynamic> route, Route<dynamic> previousRoute) {
if (onPushed != null) {
onPushed(route, previousRoute);
}
}
@override
void didPop(Route<dynamic> route, Route<dynamic> previousRoute) {
if (onPopped != null) {
onPopped(route, previousRoute);
}
}
@override
void didRemove(Route<dynamic> route, Route<dynamic> previousRoute) {
if (onRemoved != null)
onRemoved(route, previousRoute);
}
@override
void didReplace({ Route<dynamic> oldRoute, Route<dynamic> newRoute }) {
if (onReplaced != null)
onReplaced(newRoute, oldRoute);
}
@override
void didStartUserGesture() {
if (onStartGesture != null){
onStartGesture();
}
}
}
and the initialization of this observer
void main(){
runApp(new MyApp());
}
class MyApp extends StatefulWidget {
@override
_MyAppState createState() => new _MyAppState();
}
class _MyAppState extends State<MyApp> {
final NavigationObserver _observer = new NavigationObserver()
..onPushed = (Route<dynamic> route, Route<dynamic> previousRoute) {
print('** pushed route: $route');
}
..onPopped = (Route<dynamic> route, Route<dynamic> previousRoute) {
print('** poped route: $route');
}
..onReplaced = (Route<dynamic> route, Route<dynamic> previousRoute) {
print('** replaced route: $route');
}
..onStartGesture = () {
print('** on start gesture');
};
@override
void initState(){
super.initState();
}
// This widget is the root of your application.
@override
Widget build(BuildContext context) {
return new MaterialApp(
title: 'Title',
theme: new ThemeData(
primarySwatch: Colors.blue,
),
home: new SplashScreen(),
routes: <String, WidgetBuilder> {
'/splashscreen': (BuildContext context) => new SplashScreen(),
},
navigatorObservers: <NavigationObserver>[_observer],
);
}
}
Thanks for your help.
| [
"This answer is old now. Please see @dees91's answer.\nDetecting & Running Functions When Drawer Is Opened / Closed\n\nRun initState() when open drawer by any action.\nRun dispose() when close drawer by any action.\n\nclass MyDrawer extends StatefulWidget {\n @override\n _MyDrawerState createState() => _MyDrawerState();\n}\n\nclass _MyDrawerState extends State<MyDrawer> {\n\n @override\n void initState() {\n super.initState();\n print(\"open\");\n }\n\n @override\n void dispose() {\n print(\"close\");\n super.dispose();\n }\n\n @override\n Widget build(BuildContext context) {\n return Drawer(\n child: Column(\n children: <Widget>[\n Text(\"test1\"),\n Text(\"test2\"),\n Text(\"test3\"),\n ],\n ),\n );\n }\n}\n\nState Management Considerations\nIf you are altering state with these functions to rebuild drawer items, you may encounter the error: Unhandled Exception: setState() or markNeedsBuild() called during build.\nThis can be handled by using the following two functions in initState() source\nOption 1\nWidgetsBinding.instance.addPostFrameCallback((_){\n // Add Your Code here.\n});\n\nOption 2\nSchedulerBinding.instance.addPostFrameCallback((_) {\n // add your code here.\n});\n\nFull Example of Option 1\n@override\nvoid initState() {\n super.initState();\n WidgetsBinding.instance.addPostFrameCallback((_) {\n // Your Code Here\n });\n}\n\n",
"As https://github.com/flutter/flutter/pull/67249 is already merged and published with Flutter 2.0 here is proper way to detect drawer open/close:\nScaffold(\n onDrawerChanged: (isOpened) {\n //todo what you need for left drawer\n },\n onEndDrawerChanged: (isOpened) {\n //todo what you need for right drawer\n },\n)\n\n",
"Best solution\nScaffoldState has a useful method isDrawerOpen which provides the status of open/close.\nExample: Here on the back press, it first checks if the drawer is open, if yes then first it will close before exit.\n/// create a key for the scaffold in order to access it later.\nGlobalKey<ScaffoldState> _scaffoldKey = GlobalKey<ScaffoldState>();\n\n@override\nWidget build(context) {\n return WillPopScope(\n child: Scaffold(\n // assign key (important)\n key: _scaffoldKey,\n drawer: SideNavigation(),\n onWillPop: () async {\n // drawer is open then first close it\n if (_scaffoldKey.currentState.isDrawerOpen) {\n Navigator.of(context).pop();\n return false;\n }\n // we can now close the app.\n return true;\n });}\n\n",
"I think one simple solution is to override the leading property of your AppBar so you can have access when the menu icon is pressed an run your API calls based on that.\nYet I may have misunderstood your question because with the use case you provided, you usually need to manage it in a way that you can listen to any change which will update the value automatically so I am not sure what are you trying to trigger when the drawer is open.\nAnyway here is the example.\n\nclass DrawerExample extends StatefulWidget {\n @override\n _DrawerExampleState createState() => new _DrawerExampleState();\n}\n\nclass _DrawerExampleState extends State<DrawerExample> {\n GlobalKey<ScaffoldState> _key = new GlobalKey<ScaffoldState>();\n int _counter =0;\n _handleDrawer(){\n _key.currentState.openDrawer();\n\n setState(() {\n ///DO MY API CALLS\n _counter++;\n });\n\n }\n @override\n Widget build(BuildContext context) {\n return new Scaffold(\n key: _key,\n appBar: new AppBar(\n title: new Text(\"Drawer Example\"),\n centerTitle: true,\n leading: new IconButton(icon: new Icon(\n Icons.menu\n ),onPressed:_handleDrawer,),\n ),\n drawer: new Drawer(\n child: new Center(\n child: new Text(_counter.toString(),style: Theme.of(context).textTheme.display1,),\n ),\n ),\n );\n }\n}\n\n",
"You can simply use onDrawerChanged for detecting if the drawer is opened or closed in the Scaffold widget.\nProperty :\n{void Function(bool)? onDrawerChanged}\nType: void Function(bool)?\nOptional callback that is called when the Scaffold.drawer is opened or closed.\nExample :\n@override\nWidget build(BuildContext context) {\nreturn Scaffold(\n onDrawerChanged:(val){\n if(val){\n setState(() {\n //foo bar;\n });\n }else{\n setState(() {\n //foo bar;\n });\n }\n}, \n drawer: Drawer( \n child: Container(\n )\n ));\n\n}\n",
"Unfortunately, at the moment there is no readymade solution.\nYou can use the dirty hack for this: to observe the visible position of the Drawer.\nFor example, I used this approach to synchronise the animation of the icon on the button and the location of the Drawer box.\n\nThe code that solves this problem you can see below:\n import 'package:flutter/material.dart';\n import 'package:flutter/scheduler.dart';\n\n class DrawerListener extends StatefulWidget {\n final Widget child;\n final ValueChanged<FractionalOffset> onPositionChange;\n\n DrawerListener({\n @required this.child,\n this.onPositionChange,\n });\n\n @override\n _DrawerListenerState createState() => _DrawerListenerState();\n }\n\n class _DrawerListenerState extends State<DrawerListener> {\n GlobalKey _drawerKey = GlobalKey();\n int taskID;\n Offset currentOffset;\n\n @override\n void initState() {\n super.initState();\n _postTask();\n }\n\n _postTask() {\n taskID = SchedulerBinding.instance.scheduleFrameCallback((_) {\n if (widget.onPositionChange != null) {\n final RenderBox box = _drawerKey.currentContext?.findRenderObject();\n if (box != null) {\n Offset newOffset = box.globalToLocal(Offset.zero);\n if (newOffset != currentOffset) {\n currentOffset = newOffset;\n widget.onPositionChange(\n FractionalOffset.fromOffsetAndRect(\n currentOffset,\n Rect.fromLTRB(0, 0, box.size.width, box.size.height),\n ),\n );\n }\n }\n }\n\n _postTask();\n });\n }\n\n @override\n void dispose() {\n SchedulerBinding.instance.cancelFrameCallbackWithId(taskID);\n if (widget.onPositionChange != null) {\n widget.onPositionChange(FractionalOffset(1.0, 0));\n }\n super.dispose();\n }\n\n @override\n Widget build(BuildContext context) {\n return Container(\n key: _drawerKey,\n child: widget.child,\n );\n }\n }\n\nIf you are only interested in the final events of opening or closing the box, it is enough to call the callbacks in initState and dispose functions.\n",
"there is isDrawerOpen property in ScaffoldState so you can check whenever you want to check.\ncreate a global key ;\nGlobalKey<ScaffoldState> scaffoldKey = GlobalKey<ScaffoldState>();\n\nassign it to scaffold\nScaffold(\n key: scaffoldKey,\n appBar: ..)\n\ncheck where ever in the app\nbool opened =scaffoldKey.currentState.isDrawerOpen;\n\n",
"By the time this question was being posted it was a bit trick to accomplish this. But from Flutter 2.0, it is pretty easy. Inside your Scaffold you can detect both the right drawer and the left drawer as follows.\n@override\n Widget build(BuildContext context) {\n return Scaffold(\n onDrawerChanged: (isOpened) {\n *//Left drawer, Your code here,*\n },\n onEndDrawerChanged: (isOpened) {\n *//Right drawer, Your code here,*\n },\n );\n }\n\n",
"You can use Scaffold.of(context) as below to detect the Drawer status :\nNOTE: you must put your code in the Builder widget to use the context which contains scaffold.\nBuilder(\n builder: (context) => IconButton(\n icon: Icon(\n Icons.menu,\n color: getColor(context, opacity.value),\n ),\n onPressed: () {\n if (Scaffold.of(context).isDrawerOpen) {\n Scaffold.of(context).closeDrawer();\n } else {\n Scaffold.of(context).openDrawer();\n }\n },\n ),\n ),\n\n"
] | [
38,
26,
19,
8,
6,
4,
0,
0,
0
] | [] | [] | [
"flutter"
] | stackoverflow_0049965209_flutter.txt |
Q:
Power Bi DAX: Count rows dynamically and aggregate
I have this sample table:
"Running Total" is a MEASURE (NOT a column), and I need to change this measure such that it works when the date column is filtered.
Current code for "Running Total", which generates the above output:
Issue with the code: It does not work when the "date" column is filtered using a slicer.
I need this output when the date filter is set to 2018-01-01 to 2023-01-01, for example:
As you can see, the 2017 dates are removed, and therefore the "Running Total" measure is adjusted accordingly.
How to achieve this?
A:
Replace ALL() with ALLSELECTED()
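Since the measure itself is only shown as an image, here is a hedged sketch of what a running row count typically looks like with ALLSELECTED(); the table name 'Sample' and the 'date' column are assumptions:
Running Total =
CALCULATE (
    COUNTROWS ( 'Sample' ),
    FILTER (
        ALLSELECTED ( 'Sample' ),
        'Sample'[date] <= MAX ( 'Sample'[date] )
    )
)
ALLSELECTED() removes the filter coming from the current row of the visual but keeps the filters applied by the slicer, so dates excluded by the slicer no longer contribute to the running total.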
| Power Bi DAX: Count rows dynamically and aggregate | I have this sample table:
"Running Total" is a MEASURE (NOT a column), and I need to change this measure such that it works when the date column is filtered.
Current code for "Running Total", which generates the above output:
Issue with the code: It does not work when the "date" column is filtered using a slicer.
I need this output when the date filter is set to 2018-01-01-2023-01-01 for example:
As you can see, 2017 dates are removed, therefore the "Running Total" measure is adjusted accordingly.
How to achieve this?
| [
"Replace ALL() with ALLSELECTED()\n"
] | [
1
] | [] | [] | [
"dax",
"powerbi"
] | stackoverflow_0074677044_dax_powerbi.txt |