Dataset schema: Id (int64), PostTypeId (int64), AcceptedAnswerId (int64), ParentId (int64), Score (int64), ViewCount (int64), Body (string), Title (string), ContentLicense (string, 3 classes), FavoriteCount (int64), CreationDate (string, 23 chars), LastActivityDate (string, 23 chars), LastEditDate (string, 23 chars), LastEditorUserId (int64), OwnerUserId (int64), Tags (sequence)
74,170,638
2
null
53,869,118
0
null
In my case, adding the project as a Maven project helped.
null
CC BY-SA 4.0
null
2022-10-23T11:05:43.457
2022-10-23T12:14:16.623
2022-10-23T12:14:16.623
20,313,985
20,313,985
null
74,171,732
2
null
47,945,008
0
null
I encountered this problem on an Azure WebApp running on Java 11 and Tomcat 9.0. I changed the Java Web Server version from to , and then the server worked. [](https://i.stack.imgur.com/GzUKf.png)
null
CC BY-SA 4.0
null
2022-10-23T14:02:17.080
2022-10-23T14:02:17.080
null
null
757,661
null
74,172,050
2
null
74,171,972
5
null
You are creating a new `Text` on every button click, and adding this new `Text` into `rootM` every time. When you say `mText.setText(null);` you are clearing the text of the newly created `mText` variable, which is different from the one already added to the layout. To overcome this, you can define the `Text` variable once outside the event handler, and just update its text when needed.

```
final Text mText = new Text();
rootM.getChildren().add(mText);
mText.setLayoutX(300);
mText.setLayoutY(200);
mText.setFont(Font.font("Verdana", 50));
buttonNew.setOnAction(new EventHandler<ActionEvent>() {
    @Override
    public void handle(ActionEvent arg0) {
        // your code to generate random ...
        mText.setText(randomM1 + " x " + randomM2);
    }
});
```
null
CC BY-SA 4.0
null
2022-10-23T14:45:28.997
2022-10-23T14:45:28.997
null
null
3,811,895
null
74,172,374
2
null
74,171,179
0
null
Please test the following updated code. It uses a single dictionary which keeps the fruit names as (unique) keys and the concatenated existing data in an array. The result omits 'Match ID', which is redundant, since the dictionary key (the unique fruit) is enough:

```
Sub CombineData()
    Const sName As String = "Test", sDelimiter As String = ", "
    Const dName As String = "Test2", dFirstCellAddress As String = "A2"

    ' Source range to an array.
    Dim Data, rCount As Long, arrHead
    With ThisWorkbook.Worksheets(sName).Range("A1").CurrentRegion
        rCount = .Rows.Count - 1
        If rCount < 1 Then Exit Sub ' no data or only headers
        Data = .Resize(rCount).Offset(1).Value2
        arrHead = .Rows(1).Offset(, 1).Value2 ' place the relevant headers in an array
    End With

    ' Array to a dictionary:
    Dim dict As Object: Set dict = CreateObject("Scripting.Dictionary")
    dict.CompareMode = vbTextCompare
    Dim r As Long, arrIt, arrFin, i As Long
    For r = 1 To rCount
        If Not dict.Exists(Data(r, 2)) Then
            dict.Add Data(r, 2), Array(Data(r, 3), Data(r, 4), Data(r, 5))
        Else
            arrIt = dict(Data(r, 2)) ' an array item cannot be updated directly
            For i = 0 To UBound(arrIt)
                arrIt(i) = arrIt(i) & sDelimiter & Data(r, i + 3)
            Next i
            dict(Data(r, 2)) = arrIt
        End If
    Next r
    If dict.Count = 0 Then Exit Sub
    ReDim arrFin(1 To dict.Count, 1 To 4)

    ' Dictionary to final array.
    Dim Key As Variant
    r = 0
    For Each Key In dict.Keys
        r = r + 1
        arrFin(r, 1) = Key: arrIt = dict(Key)
        For i = 0 To UBound(arrIt)
            arrFin(r, i + 2) = arrIt(i)
        Next i
    Next Key

    ' Write the processed array content at once:
    With ThisWorkbook.Worksheets(dName).Range(dFirstCellAddress).Resize(UBound(arrFin), 4)
        .Value = arrFin
        .Cells(1).Offset(-1).Resize(1, UBound(arrHead, 2)).Value2 = arrHead
        .EntireColumn.AutoFit
    End With
    MsgBox "Data combined.", vbInformation
End Sub
```
null
CC BY-SA 4.0
null
2022-10-23T15:26:43.110
2022-10-23T16:16:11.933
2022-10-23T16:16:11.933
2,233,308
2,233,308
null
74,173,205
2
null
74,168,389
1
null
Canadian provinces are not part of `world_110m` map in the example gallery. You would need to provide your own geojson and topojson file that contains that information in order to work with Altair and then follow the guidelines here [How can I make a map using GeoJSON data in Altair?](https://stackoverflow.com/questions/55923300/how-can-i-make-a-map-using-geojson-data-in-altair). You can also work with geopandas together with Altair, which in many ways is more flexible. We are working on integrating info on this into the docs, but in the meantime you can view this preview version to get started [https://deploy-preview-1--spontaneous-sorbet-49ed10.netlify.app/user_guide/marks/geoshape.html](https://deploy-preview-1--spontaneous-sorbet-49ed10.netlify.app/user_guide/marks/geoshape.html)
null
CC BY-SA 4.0
null
2022-10-23T17:14:28.347
2022-10-23T17:14:28.347
null
null
2,166,823
null
74,174,034
2
null
74,152,059
0
null
```
plt.scatter(x, y)
z = np.polyfit(x, y, 1)
p = np.poly1d(z)
z2 = np.polyfit(x, y, 2)
p2 = np.poly1d(z2)
pd.options.display.float_format = '{:,.3f}'.format
plt.plot(x, y, 'go')
plt.plot(x, p(x), 'r')
plt.plot(x, p2(x))
for x1, y1 in zip(x, y):
    label = '{:,.3f}'.format(y1)
    plt.annotate(label,                       # this is the text
                 (x1, y1),                    # these are the coordinates to position the label
                 textcoords="offset points",  # how to position the text
                 xytext=(1, 4),               # distance from text to points (x,y)
                 ha='center')                 # horizontal alignment can be left, right or center
plt.show()
```
null
CC BY-SA 4.0
null
2022-10-23T19:10:26.337
2022-10-23T19:12:29.123
2022-10-23T19:12:29.123
19,736,121
19,736,121
null
74,174,100
2
null
30,233,476
0
null
I assume that what you want is that lines can't break between the numbers and the following words. How you have it in the snippet works for me: if there is no space between the span and the next word, the line can't break between them. If you really want to have an NBSP there, you can add it with something like `.number::after {content: "\A0";}`, as in the following snippet.

```
.number {
  vertical-align: super;
}
.number::after {
  content: "\A0";
}
body {
  width: 11em;
}
```

```
<span class="number">1</span>Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Phasellus nec tincidunt erat.
<span class="number">2</span>Integer lectus nulla, imperdiet semper pharetra nec, hendrerit ac tortor. Sed nec gravida enim. Quisque quis condimentum odio.
<span class="number">3</span>Quisque at sodales arcu. Aenean sapien nibh, faucibus in est id, tempus pellentesque purus. Nulla imperdiet, sem eu pellentesque pretium, justo quam scelerisque neque, vel accumsan dui enim non elit.
```

I didn't find that this behavior is guaranteed, so, to be sure, you can wrap the number together with the first word in an element with `white-space: nowrap;` in CSS. That works even if there is a space between the number and the word, like here:

```
no-br {
  white-space: nowrap;
}
.number {
  vertical-align: super;
}
body {
  width: 11em;
}
```

```
<no-br><span class="number">1</span> Orci</no-br> varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Phasellus nec tincidunt erat.
<no-br><span class="number">2</span> Integer</no-br> lectus nulla, imperdiet semper pharetra nec, hendrerit ac tortor. Sed nec gravida enim. Quisque quis condimentum odio.
<no-br><span class="number">3</span> Quisque</no-br> at sodales arcu. Aenean sapien nibh, faucibus in est id, tempus pellentesque purus. Nulla imperdiet, sem eu pellentesque pretium, justo quam scelerisque neque, vel accumsan dui enim non elit.
```

I set the width of the text in the snippets so that the changes are visible. It may render differently for you, so here is an image of how they are supposed to look: [](https://i.stack.imgur.com/Mu7dC.png)
null
CC BY-SA 4.0
null
2022-10-23T19:18:48.403
2022-10-23T19:18:48.403
null
null
7,281,855
null
74,174,263
2
null
74,100,091
0
null
You could achieve this in multiple ways, depending on the exact purpose. One notable and easy example is a bit of trickery: put the main TextBlock and a Rectangle in a horizontally aligned StackPanel with a row span of 2 and a higher Z index than the secondary TextBlock. Set the Rectangle's height equal to the sum of the heights of the two TextBlocks, and set its width to whatever your layout requires. Both TextBlocks must have a fixed height, and the TextBlock inside the StackPanel with the Rectangle must be vertically aligned to the top of the StackPanel. The result, if implemented correctly, is a Rectangle covering the part of the secondary TextBlock that makes it taller than the main TextBlock.
null
CC BY-SA 4.0
null
2022-10-23T19:41:06.763
2023-01-31T17:30:50.083
2023-01-31T17:30:50.083
13,302
16,587,692
null
74,174,290
2
null
29,405,689
0
null
I solved the issue with this command:

```
sudo ln -s /usr/lib/x86_64-linux-gnu/libxcb-util.so.0 /usr/lib/x86_64-linux-gnu/libxcb-util.so.1
```

My Python: 3.7.5. Package: PyQt5. OS: Linux (Debian).
null
CC BY-SA 4.0
null
2022-10-23T19:46:36.480
2022-10-23T19:46:36.480
null
null
15,199,047
null
74,174,324
2
null
74,174,266
0
null
In the screenshot you provided, it looks like Python or pygame might be corrupted. Try running `pip install --upgrade --force-reinstall pygame` or `pip3 install --upgrade --force-reinstall pygame`. If this doesn't work, you should reinstall Python using [brew](https://brew.sh).
null
CC BY-SA 4.0
null
2022-10-23T19:54:01.733
2022-10-23T19:54:01.733
null
null
15,342,452
null
74,174,850
2
null
41,823,702
0
null
Android Studio -> Git -> Manage Remotes... and delete everything from there with the `-` button.
null
CC BY-SA 4.0
null
2022-10-23T21:13:58.433
2022-10-23T21:13:58.433
null
null
13,736,807
null
74,175,009
2
null
74,146,920
0
null
It turned out I didn't have my php-cgi.exe file mapped correctly in IIS. I had to edit my Handler Mappings and link the FastCgiModule handler to my current installation of PHP, mapping it to the php-cgi.exe file.
null
CC BY-SA 4.0
null
2022-10-23T21:39:50.007
2022-11-01T19:55:25.987
2022-11-01T19:55:25.987
1,143,891
1,143,891
null
74,175,025
2
null
74,169,933
4
null
I didn't use , but I tested this way and it shows me the video. I added this code in MainWindow:

```
QWidget *wgt = new QWidget(this);
QGridLayout *gridLayout = new QGridLayout(wgt);
wgt->setStyleSheet(QString::fromUtf8("border: 1px solid black;\n"
                                     "border-radius: 25px;background-color:black;"));
ui->centralwidget->layout()->addWidget(wgt);

QWebEngineView* view = new QWebEngineView(wgt);
view->setWindowTitle("Qt 6 - The Ultimate UX Development Platform");
view->setUrl(QUrl("https://www.youtube.com/embed/TodEc77i4t4"));
view->settings()->setAttribute(QWebEngineSettings::PluginsEnabled, true);
view->settings()->setAttribute(QWebEngineSettings::FullScreenSupportEnabled, true);
view->settings()->setAttribute(QWebEngineSettings::AllowRunningInsecureContent, true);
view->settings()->setAttribute(QWebEngineSettings::SpatialNavigationEnabled, true);
view->settings()->setAttribute(QWebEngineSettings::JavascriptEnabled, true);
view->settings()->setAttribute(QWebEngineSettings::JavascriptCanOpenWindows, true);
wgt->layout()->addWidget(view);
```

[](https://i.stack.imgur.com/PenVX.png)

To style your widget, please look at the [Qt Style Sheets Reference](https://doc.qt.io/qt-5/stylesheet-reference.html); it's similar to CSS. For example, you can add this:

```
view->setStyleSheet(QString::fromUtf8("border: 1px solid black;\n"
                                      "border-radius: 25px;"));
```

The QWebEngineView object didn't take the style directly, so I added it inside the helper widget and applied the style to that. You can clone the example from here: [https://gitlab.com/ParisaHR/stackoverflow_qwebengineview](https://gitlab.com/ParisaHR/stackoverflow_qwebengineview)
null
CC BY-SA 4.0
null
2022-10-23T21:42:41.840
2022-10-23T22:28:51.513
2022-10-23T22:28:51.513
9,484,913
9,484,913
null
74,175,065
2
null
25,272,603
0
null
I got this error working with `Microsoft.Web.Administration.ServerManager`. It turned out to be because I was calling `appPool.Start()` after calling `serverManager.CommitChanges()`.
null
CC BY-SA 4.0
null
2022-10-23T21:51:27.800
2022-10-23T21:51:27.800
null
null
799,936
null
74,175,278
2
null
74,154,441
0
null
I had some trouble getting your page to work after looking at [https://afeld.github.io/bootstrap-toc/#headings](https://afeld.github.io/bootstrap-toc/#headings). On your sample page, the "Recon" heading is an `<h2>`, and the subordinated headings are `<h3>`. Your page contains an h1, then h2, h2, h2, and finally an h3. I used this page content for testing (with some longer lorem ipsum texts between the headings):

```
h2
h3
h3
h2
h3
h3
h3
h3
h2
```

[Screenshot from my 2022-10-21-Toc-Test.md page](https://i.stack.imgur.com/isQu7.png)

I have the following part (from the _layout.scss) to get the correct distances (together with the new CSS below):

```
nav[data-toggle="toc"] {
  top: 42px;
}
```

I have this new CSS (in the main.css):

```
.sticky-top {
  position: sticky;
  top: 100px;
  align-self: flex-start;
}
```

The idea is from [My position: sticky element isn't sticky when using flexbox](https://stackoverflow.com/questions/44446671/my-position-sticky-element-isnt-sticky-when-using-flexbox). Note: I have not edited the margins/distances of the hover effects to match the screenshot, but I am sure you will figure that out.
null
CC BY-SA 4.0
null
2022-10-23T22:37:04.473
2022-10-24T00:34:05.130
2022-10-24T00:34:05.130
3,842,598
3,842,598
null
74,175,281
2
null
74,173,318
2
null
Add `border-style:none;` where you don't want to see the border. For example:

```
QComboBox#comboBoxName {
    border-style: none;
}
```

I also tried this in your code, and this is its result:

```
QComboBox::item {
    width: 35px;
    height: 35px;
    border-style: none;
}
```

[](https://i.stack.imgur.com/xUxyl.png)
null
CC BY-SA 4.0
null
2022-10-23T22:37:47.317
2022-10-24T00:43:24.773
2022-10-24T00:43:24.773
9,484,913
9,484,913
null
74,175,582
2
null
74,168,389
0
null
Looks like you got your data from [here](https://opendata.vancouver.ca/explore/dataset/street-trees/export/?disjunctive.species_name&disjunctive.common_name&disjunctive.height_range_id&dataChart=eyJxdWVyaWVzIjpbeyJjb25maWciOnsiZGF0YXNldCI6InN0cmVldC10cmVlcyIsIm9wdGlvbnMiOnsiZGlzanVuY3RpdmUuc3BlY2llc19uYW1lIjp0cnVlLCJkaXNqdW5jdGl2ZS5jb21tb25fbmFtZSI6dHJ1ZSwiZGlzanVuY3RpdmUuaGVpZ2h0X3JhbmdlX2lkIjp0cnVlLCJsb2NhdGlvbiI6IjksNDkuMDI3OTYsLTEyMi40NzI4NCJ9fSwiY2hhcnRzIjpbeyJhbGlnbk1vbnRoIjp0cnVlLCJ0eXBlIjoibGluZSIsImZ1bmMiOiJBVkciLCJ5QXhpcyI6ImhlaWdodF9yYW5nZV9pZCIsInNjaWVudGlmaWNEaXNwbGF5Ijp0cnVlLCJjb2xvciI6IiMwMjc5QjEifV0sInhBeGlzIjoiZGF0ZV9wbGFudGVkIiwibWF4cG9pbnRzIjoiIiwidGltZXNjYWxlIjoieWVhciIsInNvcnQiOiIifV0sImRpc3BsYXlMZWdlbmQiOnRydWUsImFsaWduTW9udGgiOnRydWV9&utm_source=vancouver+is+awesome&utm_campaign=vancouver+is+awesome:+outbound&utm_medium=referral&disjunctive.on_street&disjunctive.neighbourhood_name&location=12,49.26411,-123.14541)

```
import pandas as pd
import numpy as np
import plotly.express as px

# loading data
df = pd.read_csv('street-trees.csv', sep=';')

# extracting coords
df['coords'] = df['Geom'].str.extract('\[(.*?)\]')
df['lon'] = df['coords'].str.split(',').str[0].astype(float)
df['lat'] = df['coords'].str.split(',').str[1].astype(float)

# getting neighborhood totals
df2 = pd.merge(df[['NEIGHBOURHOOD_NAME']].value_counts().reset_index(),
               df[['NEIGHBOURHOOD_NAME', 'lon', 'lat']].groupby('NEIGHBOURHOOD_NAME').mean().reset_index())

# drawing figure
fig = px.scatter_mapbox(df2, lat='lat', lon='lon', color=0, opacity=0.5,
                        center=dict(lon=df2['lon'].mean(), lat=df2['lat'].mean()),
                        zoom=11, size=0)
fig.update_layout(mapbox_style='open-street-map')
fig.show()
```

[](https://i.stack.imgur.com/xndM6.jpg)
null
CC BY-SA 4.0
null
2022-10-23T23:54:43.377
2022-10-23T23:54:43.377
null
null
17,142,551
null
74,175,761
2
null
59,432,964
1
null
Here is my schema (using sqlite as an example) with 2 tables:

```
-- This is a list of your chart of accounts
CREATE TABLE "accounts" (
    "name" TEXT,
    "number" INTEGER,
    "normal" INTEGER
)

-- This is a table of each transaction
CREATE TABLE "transactions" (
    "id" INTEGER,
    "date" TEXT,
    "amount" REAL,
    "account" INTEGER,
    "direction" INTEGER
)
```

With this convention, the `accounts.normal` and `transactions.direction` fields are set to `1` for debit and `-1` for credit. The end user never sees this, but it makes the arithmetic easy. When you create a journal entry, it will have at least 2 rows in the `transactions` table - a debit and a credit - and they should share the same `id`. To see your balances, you can run this query:

```
select (account) as a, name, sum(amount * direction * normal) as balance
from transactions
left join accounts on a = accounts.number
group by name
order by a, name;
```

To view the ledger, you can run this:

```
select id, date, name,
       case when direction == 1 then amount end as DR,
       case when direction == -1 then amount end as CR
from transactions
left join accounts on account = accounts.number
order by id, date, CR, DR;
```

I have a much more [detailed post](https://blog.journalize.io/posts/an-elegant-db-schema-for-double-entry-accounting/) of different queries you can run, along with example data. But with the above two tables, you can create a working double-entry system.
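The schema and balance query above can be exercised end-to-end with a quick in-memory sqlite3 sketch. The account names, numbers, and amounts below are made up for illustration, and the join condition is written out against `transactions.account` rather than reusing the `a` alias:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE "accounts" ("name" TEXT, "number" INTEGER, "normal" INTEGER);
CREATE TABLE "transactions" (
    "id" INTEGER, "date" TEXT, "amount" REAL,
    "account" INTEGER, "direction" INTEGER);

-- Cash is a debit-normal account, Revenue is credit-normal.
INSERT INTO accounts VALUES ('Cash', 100, 1), ('Revenue', 400, -1);

-- One journal entry (shared id): debit Cash 50, credit Revenue 50.
INSERT INTO transactions VALUES
    (1, '2022-10-24', 50.0, 100, 1),
    (1, '2022-10-24', 50.0, 400, -1);
""")

balances = con.execute("""
    SELECT transactions.account AS a, name,
           SUM(amount * direction * normal) AS balance
    FROM transactions
    LEFT JOIN accounts ON transactions.account = accounts.number
    GROUP BY name
    ORDER BY a, name
""").fetchall()
print(balances)
```

Both balances come out positive because each amount is viewed from its own account's normal side.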
null
CC BY-SA 4.0
null
2022-10-24T00:46:23.243
2022-10-24T00:46:23.243
null
null
3,788
null
74,176,209
2
null
69,862,859
0
null
As of October 2022, I tried the above and it did not work for me. I kept messing with it, and eventually my Xcode went into an infinite loading state every time I opened the project. Luckily, all my recent changes were backed up to git, so I created a new folder, pulled the project, and everything -- including previews -- seems to be working now. Note: I do have "Open With Rosetta" enabled for Xcode. I decided to try cloning outside of iCloud Drive thanks to this post: [https://stackoverflow.com/a/73035814](https://stackoverflow.com/a/73035814)
null
CC BY-SA 4.0
null
2022-10-24T02:32:10.217
2022-10-24T02:32:10.217
null
null
2,672,673
null
74,176,360
2
null
74,176,124
4
null
I created a test that generates index tests. It reads the search field of the Cypress runner and the list of filtered specs underneath it, then writes a new index spec that runs only the filtered specs. --- First, create a folder `cypress/e2e/_generated-tests`. Inside that folder, create a new spec `_generate.cy.js`:

```
const filter = Cypress.$(parent.document.body)
  .find('div#app')
  .find('#inline-spec-list-header-search')
  .val()

const specPaths = Cypress.$(parent.document.body)
  .find('div#app')
  .find('ul').eq(0)
  .find('li')
  .map((index, el) => {
    const text = el.innerText.replace('\n', '').replace('\\', '/')
    const path = Cypress.$(el).find('a').attr('href').split('?file=')[1]
    return { text, path }
  })
  .filter((index, item) => item.text.endsWith('.cy.js') && !item.text.startsWith('_'))
  .map((index, item) => item.path)
  .toArray()

it('', () => {
  const indexSpecName = filter ? `_run-[${filter}]-filter.cy.js` : '_run-all.cy.js'
  const content = specPaths.map(specPath => {
    const relativePath = specPath.replace('cypress/e2e', '')
    return `context('${specPath}', () => require('..${relativePath}'))`
  }).join('\n')
  cy.writeFile(`./cypress/e2e/_generated-tests/${indexSpecName}`, content)
})
```

To use it, first run the `_generate.cy.js` spec. Then filter the spec tree as required, and re-run this spec. It will create a new index spec under `_generated-tests` with a name of `_run-[searchTerm]-filter.cy.js`. This code is configured to my preferences (for example, spec extensions are `.cy.js`), but you can adjust it to suit your own requirements. --- To use `cypress run` excluding all the generated index files, add `cypress/e2e/_generated-tests/**/*` to the `excludeSpecPattern` configuration.
null
CC BY-SA 4.0
null
2022-10-24T03:09:32.173
2022-10-24T03:09:32.173
null
null
16,997,707
null
74,176,423
2
null
33,889,651
2
null
For me, I had to make a new folder again and add everything back.
null
CC BY-SA 4.0
null
2022-10-24T03:25:14.177
2022-10-24T03:25:14.177
null
null
11,730,536
null
74,176,441
2
null
74,176,383
0
null
The problem line is `inverse = inverse.setdefault(val, key)`. `setdefault` returns the stored value, in this case a `str`, so this line rebinds `inverse` to a `str`, not a `dict`. The second time through the loop, `inverse` is a `str`, which does not have a `setdefault` method.
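A minimal sketch of the corrected loop (the dictionary contents here are made up for illustration):

```python
d = {"a": 1, "b": 2}

inverse = {}
for key, val in d.items():
    # setdefault stores val -> key (if absent) and RETURNS the stored value;
    # don't assign that return value back to `inverse`.
    inverse.setdefault(val, key)

print(inverse)  # {1: 'a', 2: 'b'}
```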
null
CC BY-SA 4.0
null
2022-10-24T03:28:08.480
2022-10-24T03:28:08.480
null
null
9,680,087
null
74,176,539
2
null
73,504,072
1
null
TLDR - it seems that the most recent version of Visual Studio that works correctly with the SSRS Map component is 2019 32-bit (NB: 64-bit VS 2019 has the same bug as VS 2022!). My understanding of the problem is as follows: Visual Studio has some IIS libraries it runs locally to render your report (recall that an SSRS server requires IIS to host reports - VS is emulating this locally). There are a few references to this same error message being thrown by IIS: ["An attempt was made to load a program with an incorrect format" even when the platforms are the same](https://stackoverflow.com/questions/2023766/an-attempt-was-made-to-load-a-program-with-an-incorrect-format-even-when-the-p) The cause is a mismatch between 32-bit and 64-bit libraries. It seems that in making more of VS 2022 64-bit, MS has broken the map component, which presumably still uses some 32-bit libraries. I saw the same error when I tried with VS 2019 64-bit. I was able to make the map component work correctly using VS 2019 32-bit. I have reported this as a bug in Visual Studio 2022: [https://developercommunity.visualstudio.com/t/Visual-Studio-2022-SQL-Server-Reporting/10179684?port=1025&fsid=5cf1bfea-5cd6-4bea-b937-236713eac465](https://developercommunity.visualstudio.com/t/Visual-Studio-2022-SQL-Server-Reporting/10179684?port=1025&fsid=5cf1bfea-5cd6-4bea-b937-236713eac465)
null
CC BY-SA 4.0
null
2022-10-24T03:47:48.870
2022-10-24T03:47:48.870
null
null
12,434,962
null
74,176,589
2
null
72,329,965
1
null
1. Install gapi-script ([https://www.npmjs.com/package/gapi-script](https://www.npmjs.com/package/gapi-script)):

```
npm install --save gapi-script
```

2. Use it in the component:

```
import { loadGapiInsideDOM } from "gapi-script";

...

useEffect(() => {
  (async () => {
    await loadGapiInsideDOM();
  })();
});
```
null
CC BY-SA 4.0
null
2022-10-24T04:00:27.790
2022-10-24T04:00:27.790
null
null
12,931,404
null
74,176,802
2
null
74,176,729
1
null
You can do so by either taking the (that's inefficient), or else you can change while debugging in VS Code. Here is an image for reference: [](https://i.stack.imgur.com/UqqmE.png) But in case you only want to check the iterations below it (when ), you can use `if(i>=k){code}`.
null
CC BY-SA 4.0
null
2022-10-24T04:50:42.797
2022-10-24T06:23:57.740
2022-10-24T06:23:57.740
16,441,984
16,441,984
null
74,177,001
2
null
74,170,089
1
null
You can use `axes = TRUE` to get automatic axes, which may not be as pretty as you like.

```
library(spatstat)
plot(longleaf, axes = TRUE)
```

![](https://i.imgur.com/6mIAnJK.png)

I would recommend adding the axes afterwards. E.g.:

```
plot(longleaf)
axis(1, pretty(longleaf$x), pos = 0)
axis(4, pretty(longleaf$y), pos = 200)
```

![](https://i.imgur.com/xybjkIn.png)

If you want another legend position, use `leg.side`:

```
plot(longleaf, leg.side = "right", main = "")
axis(1, pretty(longleaf$x), pos = 0)
axis(2, pretty(longleaf$y), pos = 0)
```

![](https://i.imgur.com/XrP9weE.png)
null
CC BY-SA 4.0
null
2022-10-24T05:38:35.293
2022-10-24T05:38:35.293
null
null
3,341,769
null
74,177,381
2
null
74,176,729
1
null
Another option is a conditional breakpoint. Its condition can be when `i` has a specific value. [](https://i.stack.imgur.com/fLXTV.png) [](https://i.stack.imgur.com/gHvQh.png) [](https://i.stack.imgur.com/DPb8X.png) The big advantage is: you don't need to recompile your code.
null
CC BY-SA 4.0
null
2022-10-24T06:39:43.943
2022-10-24T06:39:43.943
null
null
1,137,174
null
74,177,394
2
null
74,176,729
0
null
The universal quick & dirty solution, no matter the debugger, is to modify the code. For example:

```
if(i==0)
{
    volatile int x = 0; // set breakpoint here
}
```

It's also good practice to surround such "debug only" code with a so-called "compiler switch": `#ifdef DEBUG_MODE ... #endif`, so that you don't forget to remove the code and it makes it into a release build by accident (a common problem).
null
CC BY-SA 4.0
null
2022-10-24T06:41:13.563
2022-10-24T06:41:13.563
null
null
584,518
null
74,177,961
2
null
74,163,043
0
null
I will try to answer my own question here. I think I figured it out, but would appreciate any input on my method. I was able to do it without looping, using pivot_table and merge instead. Import packages:

```
import pandas as pd
from datetime import datetime
import numpy as np
```

Import the crime dataset:

```
crime_df = pd.read_csv("/Users/howard/Crime_Data.csv")
```

Create a list of dates in the range:

```
datelist = pd.date_range(start='01-01-2011', end='12-31-2015', freq='1d')
```

Create variables for the length of this date list and the length of the unique districts list:

```
nd = len(datelist)
nu = len(crime_df['District'].unique())
```

Create a dataframe combining dates and districts:

```
date_df = pd.DataFrame({'District': crime_df['District'].unique().tolist() * nd,
                        'Date': np.repeat(datelist, nu)})
```

Now we turn to our crime dataset. I added a column of 1s to have something to sum in the next step:

```
crime_df["ones"] = 1
```

Next we take our crime data and put it in wide form using Pandas pivot_table:

```
crime_df = pd.pivot_table(crime_df, index=["District", "Date"], columns="Crime Type", aggfunc="sum")
```

This gave me stacked-level columns and an unnecessary index, so I removed them with the following (note that `droplevel` returns a new index rather than modifying in place, so it must be assigned back):

```
crime_df.columns = crime_df.columns.droplevel()
crime_df.reset_index(inplace=True)
```

The final step is to merge the two datasets. I want to put date_df first and merge on that because it includes all the dates in the range and all the districts for each date. Thus, this uses a left merge.

```
final_df = pd.merge(date_df, crime_df, on=["Date", "District"], how="left")
```

Now I can finish by filling in NaN with 0s:

```
final_df.fillna(0, inplace=True)
```

Our final dataframe is in the correct form to do time series analyses - regressions, plotting, etc. Many of the plots in matplotlib.pyplot that I use are easier to make if the date column is the index. This can be done like this:

```
final_df = final_df.set_index(['Date'])
```

That's it!
Hope this helps others and please comment on any way to improve.
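The whole pipeline can also be reproduced as a compact, self-contained sketch with made-up data (the column names `District`, `Date`, and `Crime Type` are taken from the description above; everything else is invented for illustration):

```python
import numpy as np
import pandas as pd

# Synthetic stand-in for the crime dataset.
crime_df = pd.DataFrame({
    "District": ["A", "A", "B"],
    "Date": pd.to_datetime(["2011-01-01", "2011-01-01", "2011-01-02"]),
    "Crime Type": ["Theft", "Assault", "Theft"],
})
crime_df["ones"] = 1  # something to sum

# Full date x district grid.
datelist = pd.date_range("2011-01-01", "2011-01-03", freq="1d")
districts = crime_df["District"].unique()
date_df = pd.DataFrame({
    "District": list(districts) * len(datelist),
    "Date": np.repeat(datelist, len(districts)),
})

# Wide form: one column per crime type.
wide = pd.pivot_table(crime_df, index=["District", "Date"],
                      columns="Crime Type", values="ones",
                      aggfunc="sum").reset_index()

# Left merge onto the complete grid, then fill the empty cells.
final_df = pd.merge(date_df, wide, on=["Date", "District"], how="left").fillna(0)
print(final_df)
```

Every date/district pair appears exactly once, with zero counts where no crimes were recorded.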
null
CC BY-SA 4.0
null
2022-10-24T07:45:30.197
2022-10-24T07:45:30.197
null
null
20,307,767
null
74,178,384
2
null
74,177,960
1
null
One option would be to first convert your `Gender` column to a `factor`. Afterwards you could use the `labels` argument of `scale_x_discrete` to assign your desired labels for 0 and 1. And for coloring you could basically do the same: just map `factor(Gender)` on the `color` aes, then set your desired colors via the `values` argument of `scale_color_manual`. Using some fake random example data:

```
set.seed(123)

# Create example data
data <- data.frame(
  Gender = rep(c(0, 1), 100),
  BetsA = runif(200, 0, 40000)
)

library(ggplot2)

ggplot(data = data, aes(x = factor(Gender), y = BetsA, color = factor(Gender))) +
  geom_point(alpha = 0.1) +
  scale_x_discrete(labels = c("0" = "Male", "1" = "Female")) +
  scale_color_manual(values = c("0" = "blue", "1" = "purple"),
                     labels = c("0" = "Male", "1" = "Female")) +
  ggtitle(label = "Gender Correlated with Total Number of Bets") +
  xlab(label = "Gender of Gambler") +
  ylab(label = "Total Number of Bets")
```

![](https://i.imgur.com/4KS9fqd.png)
null
CC BY-SA 4.0
null
2022-10-24T08:27:00.643
2022-10-24T08:27:00.643
null
null
12,993,861
null
74,178,393
2
null
74,178,318
2
null
You can use `VLOOKUP()`. In a table with the coin symbols and values as you described (where the symbols are column A and the names column B):

```
=VLOOKUP([SYMBOL];[A1:B6];2;FALSE)
```

Where: `Symbol` is the symbol you want to replace (you can use a cell reference); `A1:B6` is the table you want to look in; `2` is the column from which you take the value; `FALSE` because the values are not ordered numerically. Depending on your locale settings, the argument separator is `;` or `,`. [Here's an example you could use](https://docs.google.com/spreadsheets/d/112cEPxZwk4wVKY9sobSVHdl7cerx-JSNDaWxZk28cPk/edit?usp=sharing)
null
CC BY-SA 4.0
null
2022-10-24T08:27:23.747
2022-10-24T08:32:39.187
2022-10-24T08:32:39.187
19,075,135
19,075,135
null
74,178,759
2
null
74,152,447
0
null
There are some unclear things in your question, like how to treat row no. 6, where the start date is on Saturday and the end date is on Monday: if someone worked that long (illegal in most countries), isn't it all overtime? And if it isn't, what is the regular working time (9 to 5?)? Anyway, here is one of the ways to do it - a descriptive one. First the sample data:

```
WITH tbl AS (
    Select 1 "ID", To_Date('29-AUG-22 15:30:00', 'dd-MON-yy hh24:mi:ss') "START_TIME",
                   To_Date('29-AUG-22 17:30:00', 'dd-MON-yy hh24:mi:ss') "END_TIME" From Dual Union All
    Select 2 "ID", To_Date('30-AUG-22 15:30:00', 'dd-MON-yy hh24:mi:ss') "START_TIME",
                   To_Date('30-AUG-22 20:30:00', 'dd-MON-yy hh24:mi:ss') "END_TIME" From Dual Union All
    Select 3 "ID", To_Date('31-AUG-22 15:30:00', 'dd-MON-yy hh24:mi:ss') "START_TIME",
                   To_Date('31-AUG-22 17:00:00', 'dd-MON-yy hh24:mi:ss') "END_TIME" From Dual Union All
    Select 4 "ID", To_Date('01-SEP-22 17:45:00', 'dd-MON-yy hh24:mi:ss') "START_TIME",
                   To_Date('01-SEP-22 23:45:10', 'dd-MON-yy hh24:mi:ss') "END_TIME" From Dual Union All
    Select 5 "ID", To_Date('02-SEP-22 15:45:00', 'dd-MON-yy hh24:mi:ss') "START_TIME",
                   To_Date('02-SEP-22 23:59:00', 'dd-MON-yy hh24:mi:ss') "END_TIME" From Dual Union All
    Select 6 "ID", To_Date('27-AUG-22 10:30:00', 'dd-MON-yy hh24:mi:ss') "START_TIME",
                   To_Date('29-AUG-22 17:30:00', 'dd-MON-yy hh24:mi:ss') "END_TIME" From Dual Union All
    Select 7 "ID", To_Date('28-AUG-22 11:30:00', 'dd-MON-yy hh24:mi:ss') "START_TIME",
                   To_Date('28-AUG-22 20:30:00', 'dd-MON-yy hh24:mi:ss') "END_TIME" From Dual
),
```

... to show the data in a different way there is another CTE named grid...
``` grid AS ( Select ID "ID", To_Char(START_TIME, 'dd-MON-yy') "START_DATE", To_Char(START_TIME, 'DY') "START_DAY", To_Char(START_TIME, 'hh24:mi:ss') "START_TIME", CASE WHEN To_Char(START_TIME, 'DY') IN('SAT', 'SUN') THEN 'Weekend' ELSE 'Workday' END "START_TYPE", -- To_Char(END_TIME, 'dd-MON-yy') "END_DATE", To_Char(END_TIME, 'DY') "END_DAY", To_Char(END_TIME, 'hh24:mi:ss') "END_TIME", CASE WHEN To_Char(END_TIME, 'DY') IN('SAT', 'SUN') THEN 'Weekend' ELSE 'Workday' END "END_TYPE", -- CASE WHEN TRUNC(START_TIME, 'dd') = TRUNC(END_TIME, 'dd') THEN CASE WHEN To_Char(START_TIME, 'DY') IN('SAT', 'SUN') THEN To_Char(START_TIME, 'hh24:mi:ss') || ' - ' || To_Char(END_TIME, 'hh24:mi:ss') ELSE CASE WHEN To_Char(START_TIME, 'hh24:mi:ss') > '17:00:00' And To_Char(END_TIME, 'hh24:mi:ss') > To_Char(START_TIME, 'hh24:mi:ss') THEN To_Char(START_TIME, 'hh24:mi:ss') || ' - ' || To_Char(END_TIME, 'hh24:mi:ss') WHEN To_Char(START_TIME, 'hh24:mi:ss') <= '17:00:00' And To_Char(END_TIME, 'hh24:mi:ss') >= '17:00:00' THEN '17:00:00 - ' || To_Char(END_TIME, 'hh24:mi:ss') END END ELSE CASE WHEN TRUNC(END_TIME, 'dd') - TRUNC(START_TIME, 'dd') = 1 THEN CASE WHEN To_Char(START_TIME, 'DY') IN('SAT', 'SUN') THEN To_Char(START_TIME, 'hh24:mi:ss') || ' - ' ELSE '17:00:00 - ' END || REPLACE(To_Char(TRUNC(END_TIME, 'dd') - 1, 'hh24:mi:ss'), '00:00:00', '23:59:59') WHEN TRUNC(END_TIME, 'dd') - TRUNC(START_TIME, 'dd') = 2 THEN CASE WHEN To_Char(START_TIME, 'DY') IN('SAT', 'SUN') THEN To_Char(START_TIME, 'hh24:mi:ss') || ' - ' ELSE '17:00:00 - ' END || REPLACE(To_Char(TRUNC(END_TIME, 'dd') - 2, 'hh24:mi:ss'), '00:00:00', '23:59:59') ELSE To_Char(START_TIME, 'hh24:mi:ss') || ' - ' || To_Char(END_TIME, 'hh24:mi:ss') END END "OVERTIME_SPAN_0", CASE WHEN TRUNC(END_TIME, 'dd') - TRUNC(START_TIME, 'dd') = 1 THEN CASE WHEN To_Char(START_TIME, 'DY') IN('SAT', 'SUN') THEN To_Char(TRUNC(START_TIME, 'dd') + 1, 'hh24:mi:ss') || ' - ' ELSE '17:00:00 - ' END || REPLACE(To_Char(END_TIME, 'hh24:mi:ss'), '00:00:00', 
'23:59:59') WHEN TRUNC(END_TIME, 'dd') - TRUNC(START_TIME, 'dd') = 2 THEN CASE WHEN To_Char(START_TIME, 'DY') IN('SAT', 'SUN') THEN To_Char(TRUNC(START_TIME, 'dd') + 1, 'hh24:mi:ss') || ' - ' ELSE '17:00:00 - ' END || REPLACE(To_Char(TRUNC(END_TIME, 'dd') - 1, 'hh24:mi:ss'), '00:00:00', '23:59:59') ELSE Null END "OVERTIME_SPAN_1", CASE WHEN TRUNC(END_TIME, 'dd') - TRUNC(START_TIME, 'dd') = 2 THEN CASE WHEN To_Char(START_TIME, 'DY') IN('SAT', 'SUN') THEN To_Char(TRUNC(START_TIME, 'dd') + 2, 'hh24:mi:ss') || ' - ' ELSE '17:00:00 - ' END || To_Char(END_TIME, 'hh24:mi:ss') ELSE Null END "OVERTIME_SPAN_2" From tbl ) ``` ... the grid resulting dataset looks like this: ``` /* ID START_DATE START_DAY START_TIME START_TYPE END_DATE END_DAY END_TIME END_TYPE OVERTIME_SPAN_0 OVERTIME_SPAN_1 OVERTIME_SPAN_2 ----- ---------- --------- ---------- ---------- --------- ------- -------- -------- --------------------- ----------------------- ------------------- 1 29-AUG-22 MON 15:30:00 Workday 29-AUG-22 MON 17:30:00 Workday 17:00:00 - 17:30:00 2 30-AUG-22 TUE 15:30:00 Workday 30-AUG-22 TUE 20:30:00 Workday 17:00:00 - 20:30:00 3 31-AUG-22 WED 15:30:00 Workday 31-AUG-22 WED 17:00:00 Workday 17:00:00 - 17:00:00 4 01-SEP-22 THU 17:45:00 Workday 01-SEP-22 THU 23:45:10 Workday 17:45:00 - 23:45:10 5 02-SEP-22 FRI 15:45:00 Workday 02-SEP-22 FRI 23:59:00 Workday 17:00:00 - 23:59:00 6 27-AUG-22 SAT 10:30:00 Weekend 29-AUG-22 MON 17:30:00 Workday 10:30:00 - 23:59:59 00:00:00 - 23:59:59 00:00:00 - 17:30:00 7 28-AUG-22 SUN 11:30:00 Weekend 28-AUG-22 SUN 20:30:00 Weekend 11:30:00 - 20:30:00 */ ``` There are some data derived from sample data to test and show you the ways to split the data in a way that is suitable to folow the logic from your question. ...now we have all time spans that we need to calculate the overtimes. 
Here is the main query: ``` SELECT grid.ID "ID", START_DATE, START_TIME, END_DATE, END_TIME, OVERTIME_SPAN_0, CASE WHEN OVERTIME_SPAN_0 Is Null THEN 0 ELSE ( (To_Number(SubStr(OVERTIME_SPAN_0, 12, 2)) * 3600) + (To_Number(SubStr(OVERTIME_SPAN_0, 15, 2)) * 60) + To_Number(SubStr(OVERTIME_SPAN_0, 18, 2)) ) - ( (To_Number(SubStr(OVERTIME_SPAN_0, 1, 2)) * 3600) + (To_Number(SubStr(OVERTIME_SPAN_0, 4, 2)) * 60) + To_Number(SubStr(OVERTIME_SPAN_0, 7, 2)) ) END "OVERTIME_0", OVERTIME_SPAN_1, CASE WHEN OVERTIME_SPAN_1 Is Null THEN 0 ELSE ( (To_Number(SubStr(OVERTIME_SPAN_1, 12, 2)) * 3600) + (To_Number(SubStr(OVERTIME_SPAN_1, 15, 2)) * 60) + To_Number(SubStr(OVERTIME_SPAN_1, 18, 2)) ) - ( (To_Number(SubStr(OVERTIME_SPAN_1, 1, 2)) * 3600) + (To_Number(SubStr(OVERTIME_SPAN_1, 4, 2)) * 60) + To_Number(SubStr(OVERTIME_SPAN_1, 7, 2)) ) END "OVERTIME_1", OVERTIME_SPAN_2, CASE WHEN OVERTIME_SPAN_1 Is Null THEN 0 ELSE ( (To_Number(SubStr(OVERTIME_SPAN_2, 12, 2)) * 3600) + (To_Number(SubStr(OVERTIME_SPAN_2, 15, 2)) * 60) + To_Number(SubStr(OVERTIME_SPAN_2, 18, 2)) ) - ( (To_Number(SubStr(OVERTIME_SPAN_2, 1, 2)) * 3600) + (To_Number(SubStr(OVERTIME_SPAN_2, 4, 2)) * 60) + To_Number(SubStr(OVERTIME_SPAN_2, 7, 2)) ) END "OVERTIME_2" FROM grid /* R e s u l t : ID START_DATE START_TIME END_DATE END_TIME OVERTIME_SPAN_0 OVERTIME_0 OVERTIME_SPAN_1 OVERTIME_1 OVERTIME_SPAN_2 OVERTIME_2 ------ ---------- ---------- --------- -------- -------------------- ---------- --------------------- ---------- ------------------- ---------- 1 29-AUG-22 15:30:00 29-AUG-22 17:30:00 17:00:00 - 17:30:00 1800 0 0 2 30-AUG-22 15:30:00 30-AUG-22 20:30:00 17:00:00 - 20:30:00 12600 0 0 3 31-AUG-22 15:30:00 31-AUG-22 17:00:00 17:00:00 - 17:00:00 0 0 0 4 01-SEP-22 17:45:00 01-SEP-22 23:45:10 17:45:00 - 23:45:10 21610 0 0 5 02-SEP-22 15:45:00 02-SEP-22 23:59:00 17:00:00 - 23:59:00 25140 0 0 6 27-AUG-22 10:30:00 29-AUG-22 17:30:00 10:30:00 - 23:59:59 48599 00:00:00 - 23:59:59 86399 00:00:00 - 17:30:00 63000 7 
28-AUG-22 11:30:00 28-AUG-22 20:30:00 11:30:00 - 20:30:00 32400 0 0 */ ``` ... and the resulting dataset contains the overtimes (in seconds) calculated from the sample data. As said, there are still some open questions regarding the overtime rules, but I hope this helps you find your own way to deal with them. Regards...
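The OVERTIME columns in the main query are computed by converting each side of an `'hh:mm:ss - hh:mm:ss'` span to seconds and subtracting. As a quick sanity check of that arithmetic (outside of Oracle, purely for illustration), the same calculation can be sketched in Python using span strings taken from the result above:

```python
def span_seconds(span):
    """Length in seconds of a 'hh:mm:ss - hh:mm:ss' span; 0 for a NULL span."""
    if span is None:
        return 0
    start, end = span.split(" - ")

    def to_seconds(t):
        h, m, s = (int(p) for p in t.split(":"))
        return h * 3600 + m * 60 + s

    return to_seconds(end) - to_seconds(start)

# Values match the OVERTIME_0 column above
print(span_seconds("17:00:00 - 17:30:00"))  # 1800
print(span_seconds("10:30:00 - 23:59:59"))  # 48599
```

This mirrors the `SubStr`/`To_Number` expressions term by term: hours × 3600 + minutes × 60 + seconds, end minus start.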
null
CC BY-SA 4.0
null
2022-10-24T09:00:43.117
2022-10-24T09:00:43.117
null
null
19,023,353
null
74,178,802
2
null
74,169,389
0
null
You can add a loop like this: ```
\documentclass{article}
\usepackage{tikz}
\usetikzlibrary{positioning}

\begin{document}
\begin{tikzpicture}[
    node distance=1.5cm and 1cm,
    ar/.style={->,>=latex},
    middle_node/.style={
        draw,
        text width=1.5cm,
        minimum height=0.75cm,
        align=center
    },
    end_node/.style={
        draw,
        text width=1cm,
        minimum height=0.55cm,
        align=center
    }
]
    % nodes
    \node[end_node] (start) {\textbf{start}};
    \node[middle_node,right=of start] (first_step) {a};
    \node[middle_node,right=of first_step] (second_step) {b};
    \node[middle_node,right=of second_step] (third_step) {c};
    \node[end_node, right=of third_step] (stop) {\textbf{stop}};

    % lines
    \draw[ar] (start) -- (first_step);
    % loop arrow bent back onto the same box
    \draw[ar,out=120,in=60,<-] (first_step.north west) to (first_step.north east);
    \draw[ar] (first_step) -- (second_step);
    \draw[ar] (second_step) -- (third_step);
    \draw[ar] (third_step) -- (stop);
\end{tikzpicture}
\end{document}
``` [](https://i.stack.imgur.com/Al9bj.png)
null
CC BY-SA 4.0
null
2022-10-24T09:04:39.667
2022-10-24T09:04:39.667
null
null
2,777,074
null
74,179,065
2
null
30,729,543
0
null
I made an online generator to create the shape using only CSS and clip-path: [https://css-generators.com/starburst-shape/](https://css-generators.com/starburst-shape/). It also provides the border-only version and since it uses clip-path, we can have gradient coloration ``` .box { width: 200px; aspect-ratio: 1; background: linear-gradient(red, blue); display: inline-block; margin: 10px; clip-path: polygon(100.00% 50.00%, 79.54% 55.21%, 96.98% 67.10%, 75.98% 65.00%, 88.30% 82.14%, 69.28% 72.98%, 75.00% 93.30%, 60.26% 78.19%, 58.68% 99.24%, 50.00% 80.00%, 41.32% 99.24%, 39.74% 78.19%, 25.00% 93.30%, 30.72% 72.98%, 11.70% 82.14%, 24.02% 65.00%, 3.02% 67.10%, 20.46% 55.21%, 0.00% 50.00%, 20.46% 44.79%, 3.02% 32.90%, 24.02% 35.00%, 11.70% 17.86%, 30.72% 27.02%, 25.00% 6.70%, 39.74% 21.81%, 41.32% 0.76%, 50.00% 20.00%, 58.68% 0.76%, 60.26% 21.81%, 75.00% 6.70%, 69.28% 27.02%, 88.30% 17.86%, 75.98% 35.00%, 96.98% 32.90%, 79.54% 44.79%); } .alt { clip-path: polygon(100.00% 50.00%, 86.54% 55.79%, 97.55% 65.45%, 82.97% 66.80%, 90.45% 79.39%, 76.16% 76.16%, 79.39% 90.45%, 66.80% 82.97%, 65.45% 97.55%, 55.79% 86.54%, 50.00% 100.00%, 44.21% 86.54%, 34.55% 97.55%, 33.20% 82.97%, 20.61% 90.45%, 23.84% 76.16%, 9.55% 79.39%, 17.03% 66.80%, 2.45% 65.45%, 13.46% 55.79%, 0.00% 50.00%, 13.46% 44.21%, 2.45% 34.55%, 17.03% 33.20%, 9.55% 20.61%, 23.84% 23.84%, 20.61% 9.55%, 33.20% 17.03%, 34.55% 2.45%, 44.21% 13.46%, 50.00% 0.00%, 55.79% 13.46%, 65.45% 2.45%, 66.80% 17.03%, 79.39% 9.55%, 76.16% 23.84%, 90.45% 20.61%, 82.97% 33.20%, 97.55% 34.55%, 86.54% 44.21%, 100% 50%, calc(100% - 25px) 50%, calc(86.54% - 18.27px) calc(44.21% - -2.89px), calc(97.55% - 23.78px) calc(34.55% - -7.73px), calc(82.97% - 16.48px) calc(33.20% - -8.40px), calc(90.45% - 20.23px) calc(20.61% - -14.69px), calc(76.16% - 13.08px) calc(23.84% - -13.08px), calc(79.39% - 14.69px) calc(9.55% - -20.23px), calc(66.80% - 8.40px) calc(17.03% - -16.48px), calc(65.45% - 7.73px) calc(2.45% - -23.78px), calc(55.79% - 2.89px) 
calc(13.46% - -18.27px), calc(50.00% - -0.00px) calc(0.00% - -25.00px), calc(44.21% - -2.89px) calc(13.46% - -18.27px), calc(34.55% - -7.73px) calc(2.45% - -23.78px), calc(33.20% - -8.40px) calc(17.03% - -16.48px), calc(20.61% - -14.69px) calc(9.55% - -20.23px), calc(23.84% - -13.08px) calc(23.84% - -13.08px), calc(9.55% - -20.23px) calc(20.61% - -14.69px), calc(17.03% - -16.48px) calc(33.20% - -8.40px), calc(2.45% - -23.78px) calc(34.55% - -7.73px), calc(13.46% - -18.27px) calc(44.21% - -2.89px), calc(0.00% - -25.00px) calc(50.00% - 0.00px), calc(13.46% - -18.27px) calc(55.79% - 2.89px), calc(2.45% - -23.78px) calc(65.45% - 7.73px), calc(17.03% - -16.48px) calc(66.80% - 8.40px), calc(9.55% - -20.23px) calc(79.39% - 14.69px), calc(23.84% - -13.08px) calc(76.16% - 13.08px), calc(20.61% - -14.69px) calc(90.45% - 20.23px), calc(33.20% - -8.40px) calc(82.97% - 16.48px), calc(34.55% - -7.73px) calc(97.55% - 23.78px), calc(44.21% - -2.89px) calc(86.54% - 18.27px), calc(50.00% - 0.00px) calc(100.00% - 25.00px), calc(55.79% - 2.89px) calc(86.54% - 18.27px), calc(65.45% - 7.73px) calc(97.55% - 23.78px), calc(66.80% - 8.40px) calc(82.97% - 16.48px), calc(79.39% - 14.69px) calc(90.45% - 20.23px), calc(76.16% - 13.08px) calc(76.16% - 13.08px), calc(90.45% - 20.23px) calc(79.39% - 14.69px), calc(82.97% - 16.48px) calc(66.80% - 8.40px), calc(97.55% - 23.78px) calc(65.45% - 7.73px), calc(86.54% - 18.27px) calc(55.79% - 2.89px), calc(100.00% - 25.00px) calc(50.00% - 0.00px)); } ``` ``` <div class="box"></div> <div class="box alt"></div> ``` [](https://i.stack.imgur.com/3Exfp.png)
null
CC BY-SA 4.0
null
2022-10-24T09:26:25.780
2022-10-24T09:26:25.780
null
null
8,620,333
null
74,179,068
2
null
74,178,940
0
null
I think you need to use the `sticky` argument in your grid: > The width of a column (and height of a row) depends on all the widgets contained in it. That means some widgets could be smaller than the cells they are placed in. If so, where exactly should they be put within their cells? By default, if a cell is larger than the widget contained in it, the widget will be centered within it, both horizontally and vertically. The master's background color will display in the empty space around the widget. In the figure below, the widget in the top right is smaller than the cell allocated to it. The (white) background of the master fills the rest of the cell. The sticky option can change this default behavior. Its value is a string of 0 or more of the compass directions nsew, specifying which edges of the cell the widget should be "stuck" to. For example, a value of n (north) will jam the widget up against the top side, with any extra vertical space on the bottom; the widget will still be centered horizontally. A value of nw (north-west) means the widget will be stuck to the top left corner, with extra space on the bottom and right. So your code should look something like: ``` self.notebook.grid(row=0, column=0, sticky='NSWE') ``` If you need more information, check out this [article](https://tkdocs.com/tutorial/grid.html). Thanks to @acw1668 for the hint! I think the behaviour of the `ttk.Notebook` class is strange with the grid layout. I managed to "solve" it using pack in the `App()` class. 
Here is the code that worked for me: ```
from tkinter import Tk, Text, Button, BOTH, N
from tkinter.ttk import Notebook, Frame

class App(Tk):
    def __init__(self) -> None:
        super().__init__()
        self.notebook = Notebook(self)  # <-- Widget I want to fill the window
        self.control_frame = ControlFrame(self.notebook)
        self.notebook.add(self.control_frame, text="Control", sticky="NSWE")
        self.notebook.pack(expand=True, fill=BOTH)

class ControlFrame(Frame):
    def __init__(self, master):
        super().__init__(master)
        self.control_bar = ControlBar(self)  # ControlBar is the class from the question
        self.control_bar.grid(row=0, column=0)
        self.grid(column=0, row=0, sticky="SNWE")
        self.connect_btn = Button(self, text="Connect")
        self.connect_btn.grid(row=1, column=0, sticky=N)
        self.log = Text(self)
        self.log.grid(row=2, column=0, sticky="NSWE")
        self.grid_rowconfigure(0, weight=0)
        self.grid_rowconfigure(1, weight=0)
        self.grid_rowconfigure(2, weight=1)
        self.grid_columnconfigure(0, weight=1)
``` Hope this solves your problem.
null
CC BY-SA 4.0
null
2022-10-24T09:26:29.633
2022-10-25T07:18:32.563
2022-10-25T07:18:32.563
7,022,759
7,022,759
null
74,179,091
2
null
64,396,183
1
null
I believe a skeleton is what you are looking for. ```
import cv2
import timeit

img = cv2.imread('Ggh8d - Copy.jpg', 0)
s = timeit.default_timer()
thinned = cv2.ximgproc.thinning(img, thinningType=cv2.ximgproc.THINNING_ZHANGSUEN)
e = timeit.default_timer()
print(e - s)
cv2.imwrite("thinned1.png", thinned)
``` [](https://i.stack.imgur.com/ALJMh.png) If we smooth the edge a little bit: [](https://i.stack.imgur.com/SRDBT.jpg) [](https://i.stack.imgur.com/4dm5P.png) Actually the line will not touch the yellow point, since the algorithm has to check the distance from the edges, and the yellow point is located on the edge.
null
CC BY-SA 4.0
null
2022-10-24T09:28:26.913
2022-10-24T09:28:26.913
null
null
7,364,454
null
74,179,108
2
null
74,170,089
0
null
In your example code, the object `swedishpines` belongs to the class `ppp`. When you type `plot(swedishpines)`, the method `plot.ppp` is invoked. To find out how to control this plot, see the help for the method `plot.ppp`. The help file for `plot.ppp` includes a section explaining how to draw axes. It also answers other common questions, like how to control the white space around the plot. Some people have suggested that you just extract the x, y coordinates from the object `swedishpines` and plot them in a scatterplot. This is not advisable, because that would change the relative scale (aspect ratio) of the x and y axes, so that the plot would be spatially distorted. Also `plot.default` artificially inserts a small space between the data points and the axis, by default, to make the plot easier to read; but this space does not exist in the original physical data.
null
CC BY-SA 4.0
null
2022-10-24T09:30:39.573
2022-10-24T09:30:39.573
null
null
10,988,264
null
74,179,520
2
null
67,664,900
0
null
You can use `ChangeNotifierProxyProvider0` together with `ChangeNotifier`: 1. First model class (`FirstModel`): ```
class FirstModel with ChangeNotifier {
  final List<String> _names = ["Sat", "Sat2", "Sat3"];

  List<String> get names {
    return _names;
  }
}
``` 2. Second model class (`SecondModel`), which depends on the first: ```
class SecondModel with ChangeNotifier {
  SecondModel(this.firstModel);

  final FirstModel firstModel;

  List<String> getNames() {
    return firstModel.names;
  }
}
``` In `main()`, just register both providers and let `update` rebuild the second model, example below: ```
void main() {
  runApp(
    MultiProvider(
      providers: [
        ChangeNotifierProvider<FirstModel>(create: (_) => FirstModel()),
        ChangeNotifierProxyProvider0<SecondModel>(
          create: (BuildContext context) =>
              SecondModel(Provider.of<FirstModel>(context, listen: false)),
          update: (BuildContext context, SecondModel secondModel) =>
              SecondModel(Provider.of<FirstModel>(context, listen: false)),
        ),
      ],
      child: MyApp(),
    ),
  );
}

class MyApp extends StatelessWidget {
  @override
  Widget build(BuildContext context) {
    return const MaterialApp(
      home: MyHomePage(),
    );
  }
}
``` Similar other classes are also available... For more information, please refer to the link below... [ChangeNotifierProxyProvider0 class API](https://pub.dev/documentation/provider/latest/provider/ChangeNotifierProxyProvider0-class.html)
null
CC BY-SA 4.0
null
2022-10-24T10:04:45.473
2022-10-24T10:04:45.473
null
null
7,814,211
null
74,179,926
2
null
73,581,503
-1
null
I have been able to extract the midpoint by fitting an ellipse to the arc visible to the camera. The centroid of the ellipse is the required midpoint. [](https://i.stack.imgur.com/YNfTi.png) There will be wrong ellipses as well, which can be ignored. The steps to extract the ellipse were:

- Extract the markers
- Binarise and skeletonise
- Fit an ellipse to the arc (found a MATLAB function for this)
- Get the centroid of the ellipse

```
hsv_img = rgb2hsv(im);
bin = hsv_img(:,:,3) > marker_th; % marker_th was chosen as 0.35
% skeletonise
skel = bwskel(bin);
% use regionprops to get the PixelList
stats = regionprops(skel, 'all');
for i = 1:numel(stats)
    el = fit_ellipse(stats(i).PixelList(:,1), stats(i).PixelList(:,2));
    ellipse_draw(el.a, el.b, -el.phi, el.X0_in, el.Y0_in, 'g');
end
```

- [The link for the fit_ellipse function](https://it.mathworks.com/matlabcentral/fileexchange/3215-fit_ellipse)
- [Link for the ellipse_draw function](https://it.mathworks.com/matlabcentral/fileexchange/289-ellipse-m)
null
CC BY-SA 4.0
null
2022-10-24T10:43:07.673
2022-10-24T10:43:07.673
null
null
16,490,717
null
74,179,996
2
null
24,577,403
1
null
In the page view controller you can write the following code. For transition style scroll: ```
required init?(coder aDecoder: NSCoder) {
    super.init(transitionStyle: .scroll, navigationOrientation: .horizontal, options: nil)
}
``` For transition style page curl: ```
required init?(coder aDecoder: NSCoder) {
    super.init(transitionStyle: .pageCurl, navigationOrientation: .horizontal, options: nil)
}
``` Thank you
null
CC BY-SA 4.0
null
2022-10-24T10:49:17.770
2022-10-24T10:49:17.770
null
null
10,158,396
null
74,180,682
2
null
74,111,171
1
null
There's no way to address the underlying problem, as that's up to the browser. One workaround is to animate in steps, and find a value that's low but still produces a smooth animation: ``` animation: breathe 4000ms infinite both steps(40); ``` This will produce a maximum of 40 "versions" of your font across all keyframes, as opposed to potentially thousands.
null
CC BY-SA 4.0
null
2022-10-24T11:50:16.513
2022-10-24T11:50:16.513
null
null
3,041,343
null
74,180,888
2
null
25,119,193
0
null
Just a brief modification to the solution for better string formatting: I would recommend changing the format function to include latex formatting: ``` def y_fmt(x, y): return '${:2.1e}'.format(x).replace('e', '\\cdot 10^{') + '}$' ``` [](https://i.stack.imgur.com/ZPVQg.png)
null
CC BY-SA 4.0
null
2022-10-24T12:08:20.300
2022-10-24T12:08:20.300
null
null
10,156,184
null
74,180,908
2
null
74,180,706
1
null
Most likely your API returns a single user and not a list of them. If you still need a list, you could wrap it in one. So maybe try this: ```
static Future<List<User>> getUser() async {
  var url = '${Constants.BASE_NO_TOKEN_DOMAIN}api?action=user_profile&token=${Constants.USER_TOKEN}';
  final response = await http.get(Uri.parse(url));
  final body = jsonDecode(response.body);
  return [User.fromJson(body['data'])];
}
``` EDIT: From your other comments I could conclude that the parsing of the user goes wrong. You do ``` avatar: json['avatar_100'] ``` but there is no field named that. You should try ``` avatar: json['avatar']['100'] ```
null
CC BY-SA 4.0
null
2022-10-24T12:10:40.273
2022-10-24T12:55:25.157
2022-10-24T12:55:25.157
1,514,861
1,514,861
null
74,181,412
2
null
74,180,706
2
null
First, you have a Map, not a List, so you need to change your `getUser` to this: ```
static Future<User> getUser() async { //<--- change this
  var url = '${Constants.BASE_NO_TOKEN_DOMAIN}api?action=user_profile&token=${Constants.USER_TOKEN}';
  final response = await http.get(Uri.parse(url));
  final body = jsonDecode(response.body);
  return User.fromJson(body['data']); //<--- change this
}
``` Second, in your model class you are parsing `avatar` wrong; try this: ```
class User {
  final String fullName;
  final String avatar;
  final int phone;

  const User({
    required this.fullName,
    required this.avatar,
    required this.phone,
  });

  static User fromJson(json) => User(
        fullName: json['full_name'],
        avatar: json['avatar']['100'], // <--- change this
        phone: json['phone'],
      );
}
``` Also change your `FutureBuilder` to this: ``` FutureBuilder<User>( ... ) ```
null
CC BY-SA 4.0
null
2022-10-24T12:55:41.440
2022-10-24T12:55:41.440
null
null
10,306,997
null
74,181,438
2
null
18,215,068
0
null
To eliminate an automatically added column, set the `Width` of a `DataGridTextColumn` to `*`. The following code shows a DataGrid with two columns and no automatically added columns: ```
<DataGrid AutoGenerateColumns="False" HeadersVisibility="Column">
    <DataGrid.Columns>
        <DataGridTextColumn Width="*" Binding="{Binding asdf}" />
        <DataGridTextColumn Width="*" Binding="{Binding zuio}" />
    </DataGrid.Columns>
</DataGrid>
```
null
CC BY-SA 4.0
null
2022-10-24T12:57:36.597
2022-10-24T13:25:59.370
2022-10-24T13:25:59.370
15,104,889
15,104,889
null
74,182,451
2
null
71,635,524
1
null
Try removing the service and adding it back in IIS. It worked for me. An incorrect path for the service might be the cause of this error.
null
CC BY-SA 4.0
null
2022-10-24T14:13:22.723
2022-11-04T17:33:34.153
2022-11-04T17:33:34.153
14,830,395
14,830,395
null
74,182,726
2
null
72,855,183
0
null
From your master node setup command: since you did not supply `K3S_TOKEN`, a token was generated, so make sure your `YOUR_MASTER_TOKEN` value is correct; it can be retrieved by running `sudo cat /var/lib/rancher/k3s/server/token` on the master node. The command you run on the K3s agent doesn't look right; it seems that you are mixing the commands for joining the cluster as an `agent` and as a `master`, so make sure you know the difference between an HA cluster and a non-HA cluster. To add a K3s agent to the cluster, just run ```
export URL="https://<<Master IP address>>:6443"
export TOKEN="<<TOKEN>>"
curl -sfL https://get.k3s.io | K3S_URL=$URL K3S_TOKEN=$TOKEN sh -
``` Finally, as you are running it in AWS, make sure your `VPC` settings are correct; this includes the right `Security Group` settings to allow communication between the `IP range` and `Port range` of your `master` and `agent` nodes, and also the `NACL` of your subnets. If you are doing it for POC purposes, just putting all the instances in the same `public subnet` will save you trouble.
null
CC BY-SA 4.0
null
2022-10-24T14:33:42.520
2022-10-24T14:33:42.520
null
null
15,603,575
null
74,182,898
2
null
74,129,135
0
null
With many tools, there are two different steps:

1. Calculate code coverage
2. Format the code coverage results as HTML

The first step usually creates a machine-readable XML or JSON file, as you have already discovered. This machine-readable file needs to be nicely formatted as HTML in the second step. We're using [ReportGenerator](https://reportgenerator.io/getstarted) for this; it's pretty straightforward. But there are also other tools that wrap those two steps into a single call, e.g. JetBrains dotCover. They have a console runner for free and you can find out more [here](https://www.jetbrains.com/help/dotcover/dotCover__Console_Runner_Commands.html#cover).
null
CC BY-SA 4.0
null
2022-10-24T14:45:13.663
2022-10-24T14:45:13.663
null
null
4,919,526
null
74,184,375
2
null
26,324,990
-1
null
Maybe you can use ``` window.location.href = "your string"; ``` or ``` window.location.href += "your string"; ``` To change the title, you can simply use ``` document.getElementsByTagName("title")[0].innerHTML = "your string"; ``` The only problem is that the browser could perhaps say "redirect blocked", and/or throttle the redirects to prevent the page from hanging.
null
CC BY-SA 4.0
null
2022-10-24T16:51:01.997
2022-10-24T16:51:01.997
null
null
20,251,203
null
74,184,535
2
null
74,184,481
1
null
try: ``` =QUERY(A:D; "limit "&ROUNDUP(COUNTA(A2:A)/2); 1) ``` and: ``` =QUERY(A:D; "offset "&ROUNDUP(COUNTA(A2:A)/2); 1) ``` --- ## UPDATE same principle... ``` =ARRAYFORMULA(SPLIT(FLATTEN(SPLIT(QUERY(MAP(UNIQUE(FILTER(C2:C, C2:C<>""))*1, LAMBDA(x, QUERY(FLATTEN(QUERY(TRANSPOSE(QUERY(FILTER({A:C, D:D&"​"}, C:C=x), "limit "&ROUNDDOWN(COUNTA(FILTER(A:A, C:C=x))/2), )),,9^9)),,9^9))),,9^9), "​")), " ")) ``` and: ``` =ARRAYFORMULA(SPLIT(FLATTEN(SPLIT(QUERY(MAP(UNIQUE(FILTER(C2:C, C2:C<>""))*1, LAMBDA(x, QUERY(FLATTEN(QUERY(TRANSPOSE(QUERY(FILTER({A:C, D:D&"​"}, C:C=x), "offset "&ROUNDDOWN(COUNTA(FILTER(A:A, C:C=x))/2), )),,9^9)),,9^9))),,9^9), "​")), " ")) ``` [](https://i.stack.imgur.com/RVeRo.png)
null
CC BY-SA 4.0
null
2022-10-24T17:07:03.277
2022-10-24T18:36:27.717
2022-10-24T18:36:27.717
5,632,629
5,632,629
null
74,184,722
2
null
74,184,659
1
null
The issue is that you subset the vectors. Instead subset the data used for `geom_rug`: ``` library(ggplot2) ggplot(data = swiss) + geom_point(mapping = aes(x = Education, y = Fertility)) + geom_smooth(method = "lm", aes(x = Education, y = Fertility), se = FALSE) + geom_smooth( method = "loess", aes( x = Education, y = Fertility, col = "red" ), linetype = "dotted", lwd = 2, se = FALSE ) + geom_rug( data = subset(swiss, Agriculture >= 50), mapping = aes(x = Education, y = Fertility), color = "blue" ) #> `geom_smooth()` using formula 'y ~ x' #> `geom_smooth()` using formula 'y ~ x' ``` ![](https://i.imgur.com/45btswG.png) And to show the rug only at the bottom as in the image you posted then you have to set `sides="b"`: ``` library(ggplot2) ggplot(data = swiss) + geom_point(mapping = aes(x = Education, y = Fertility)) + geom_smooth(method = "lm", aes(x = Education, y = Fertility), se = FALSE) + geom_smooth( method = "loess", aes( x = Education, y = Fertility, col = "red" ), linetype = "dotted", lwd = 2, se = FALSE ) + geom_rug( data = subset(swiss, Agriculture >= 50), mapping = aes(x = Education, y = Fertility), color = "blue", sides = "b" ) #> `geom_smooth()` using formula 'y ~ x' #> `geom_smooth()` using formula 'y ~ x' ``` ![](https://i.imgur.com/gDMcrhc.png)
null
CC BY-SA 4.0
null
2022-10-24T17:27:31.330
2022-10-24T18:02:41.430
2022-10-24T18:02:41.430
12,993,861
12,993,861
null
74,184,749
2
null
49,228,926
0
null
``` SELECT DISTINCT CITY FROM STATION WHERE CITY REGEXP '^[^aeiou]' /*Checks City does not start with vowel*/ AND CITY REGEXP '[^aeiou]$'; /*Checks City does not end with vowel*/ ```
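For anyone who wants to verify what the two anchors do outside of SQL: the same character-class patterns behave the same way in Python's `re` module (the city names below are made-up examples; `re.IGNORECASE` mirrors MySQL's default case-insensitive matching):

```python
import re

def keeps_city(city):
    # Same two checks as the SQL above:
    # no vowel at the start, and no vowel at the end
    return (re.search(r'^[^aeiou]', city, re.IGNORECASE) is not None
            and re.search(r'[^aeiou]$', city, re.IGNORECASE) is not None)

print(keeps_city("York"))     # True: starts with Y, ends with k
print(keeps_city("Amo"))      # False: starts and ends with a vowel
print(keeps_city("Oakland"))  # False: starts with a vowel
```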
null
CC BY-SA 4.0
null
2022-10-24T17:29:42.693
2022-10-24T17:29:42.693
null
null
13,651,618
null
74,184,884
2
null
74,184,677
0
null
The simple answer is: no, this type of query isn't possible on Firebase. If you need this type of expressive query and want ad-hoc querying on your database, consider using a SQL database - those are much better suited for dynamic and ad-hoc queries. To the specifics of your question: - [Query based on multiple where clauses in Firebase](https://stackoverflow.com/questions/26700924/query-based-on-multiple-where-clauses-in-firebase)- [Firebase query if child of child contains a value](https://stackoverflow.com/questions/40656589/firebase-query-if-child-of-child-contains-a-value) But even with those cases covered, I don't think what you want to accomplish is possible on Firebase Realtime Database without extensive data duplication and client-side processing. Hence my recommendation to consider using a SQL database.
null
CC BY-SA 4.0
null
2022-10-24T17:42:08.287
2022-10-24T17:42:08.287
null
null
209,103
null
74,184,950
2
null
74,184,523
0
null
It looks like you never import the CSS file. Try adding `import "./index.css"` in the `src/App.js` file
null
CC BY-SA 4.0
null
2022-10-24T17:48:10.237
2022-10-24T17:48:10.237
null
null
9,088,682
null
74,185,182
2
null
46,569,139
0
null
Execute the following file to set up the environment: ``` "C:\Program Files (x86)\Microsoft Visual Studio\2019\Enterprise\VC\Auxiliary\Build\vcvarsall.bat" ``` Also, are you using Qt Creator with VS/cl.exe or with MinGW/g++?
null
CC BY-SA 4.0
null
2022-10-24T18:13:41.083
2022-10-25T16:50:42.120
2022-10-25T16:50:42.120
5,346,581
5,346,581
null
74,185,185
2
null
68,566,244
1
null
I've got the same code, `print(category_index[detections['detection_classes'][0]+label_id_offset]["name"])` worked perfectly
null
CC BY-SA 4.0
null
2022-10-24T18:13:57.467
2022-10-24T18:13:57.467
null
null
20,324,139
null
74,185,215
2
null
74,184,677
0
null
Honestly, the Realtime Database will be more of a headache in this use case if you wish to handle complex filtering at the database level. You could filter based on one aspect and then perform additional filters client-side to suit your UI. If you wish to continue with Firebase, I would highly recommend Firestore instead. Your data structure would not have to change at all and you would be able to perform complex queries as you wish. The semantics are a bit different, but this is the equivalent query using Firestore: ```
import { collection, query, where } from "firebase/firestore";
import db from "./where/your/firebase/config/lives.js";

const recipesDatabase = collection(db, "cities");
const recipesByIngredient = query(
  recipesDatabase,
  where("time", "==", 15),
  where("ingredients", "array-contains", ["rice", "tomato", "onion"])
);
``` The "array-contains" operator will find recipes using all of the listed ingredients. Alternatively, you can use "array-contains-any", which will query all recipes containing any of the searched ingredients. See the Firestore docs [regarding indexes](https://firebase.google.com/docs/firestore/query-data/indexing) to help facilitate this.
null
CC BY-SA 4.0
null
2022-10-24T18:16:40.060
2022-10-24T18:16:40.060
null
null
20,317,091
null
74,185,384
2
null
74,130,132
0
null
This worked for me: `run.setText("\u202B" + activity + "\u202B");`
null
CC BY-SA 4.0
null
2022-10-24T18:37:55.703
2022-10-24T18:37:55.703
null
null
20,284,979
null
74,185,716
2
null
74,184,481
1
null
The accepted answer is elegant and achieves what the OP needs. If someone wants to go the Apps Script way, here's one approach: ```
function myFunction() {
  const ss = SpreadsheetApp.getActiveSpreadsheet();
  const rawSheet = ss.getSheetByName('Raw');
  const testSheet = ss.getSheetByName('Test Group');
  const controlSheet = ss.getSheetByName('Control Group');
  const rawData = rawSheet.getDataRange().getDisplayValues(); // Returns a 2D array
  const headerRow = rawData.shift();
  const timestampColumn = rawData.map(row => row[2]);
  const uniqueTimestamps = timestampColumn.filter((time, index) => timestampColumn.indexOf(time) === index);
  let testGroupAll = [headerRow];
  let controlGroupAll = [headerRow];
  for (const time of uniqueTimestamps) {
    // slotData is the test group after splicing below
    const slotData = rawData.filter(row => row[2] === time);
    const controlGroup = slotData.splice(0, Math.round(slotData.length / 2));
    Logger.log(slotData); // test group - 2D array
    Logger.log(controlGroup); // 2D array
    testGroupAll = testGroupAll.concat(slotData);
    controlGroupAll = controlGroupAll.concat(controlGroup);
  }
  Logger.log(testGroupAll);
  Logger.log(controlGroupAll);
  testSheet.getDataRange().clear();
  controlSheet.getDataRange().clear();
  testSheet.getRange(1, 1, testGroupAll.length, 4).setValues(testGroupAll);
  controlSheet.getRange(1, 1, controlGroupAll.length, 4).setValues(controlGroupAll);
}
``` [Edit]: That was unexpected. Thank you for reconsidering and marking this as the accepted answer. I'd like to reiterate that the answer from @player0 is elegant, though not easy to comprehend, but that's due to the nature of formulae themselves - be it Google Sheets or good old MS Excel. Those who are familiar with Google Sheets formulae would see that the use of `QUERY` with `limit` and `offset` alone takes away a lot of complexity that would otherwise have been there.
null
CC BY-SA 4.0
null
2022-10-24T19:09:32.427
2022-10-25T06:06:05.420
2022-10-25T06:06:05.420
10,276,412
10,276,412
null
74,185,772
2
null
50,402,976
0
null
Note that you can run the command from your shell using `firebase functions:shell` --- If you have the [gcloud sdk](https://cloud.google.com/sdk/docs/install) installed, from your terminal:

- log in to firebase: `firebase login`
- create a folder and cd into it (or use your existing project): `mkdir sample_func && cd $_`
- initialize the directory for firebase functions: `firebase init functions`
- write the code in `functions/index.js`. Something like this:

```
const functions = require("firebase-functions");
const admin = require("firebase-admin");

admin.initializeApp();

// Delete all users in firebase auth
exports.deleteAllUsers = functions.https.onRequest(async (req, res) => {
  const listUsers = await admin.auth().listUsers();
  for (const user of listUsers.users) {
    console.log(`Deleting user: ${user.email || 'anonymous'}`);
    await admin.auth().deleteUser(user.uid);
    // Wait to avoid hitting the rate limit. Note that this might cause
    // your function to timeout, in which case you might have to run the
    // function multiple times.
    await new Promise((resolve) => setTimeout(resolve, 100));
  }
  res.send("All users deleted");
});
```

- Run it locally. From within the folder:
  - `firebase functions:shell`
  - `deleteAllUsers()`
null
CC BY-SA 4.0
null
2022-10-24T19:13:48.083
2022-10-24T19:13:48.083
null
null
9,116,982
null
74,185,790
2
null
74,184,103
2
null
The error says cannot find `#username` but clearly it is present, so you may have a `shadowroot` in the DOM above the `<input>`. If so, add a configuration to allow searching within, in `cypress.config.js` ``` const { defineConfig } = require('cypress') module.exports = defineConfig({ e2e: { baseUrl: 'http://localhost:1234' }, includeShadowDom: true, }) ``` If you don't see `shadowroot`, look for an `<iframe>` element. Handling an iframe is best done with [Cypress iframe](https://www.npmjs.com/package/cypress-iframe)
null
CC BY-SA 4.0
null
2022-10-24T19:15:27.727
2022-10-24T19:15:27.727
null
null
19,867,290
null
74,186,168
2
null
74,184,562
0
null
From what I understand, you have the following structure: ``` [ ( (A1, B1, C1, D1, Y, Z), (A2, B2, C2, D2, W, X) ), ... ] ``` And you're trying to convert to a Dataframe with this structure: ``` A B C D W X Y Z ---------------------------------- A1 B1 C1 D1 NaN NaN Y Z A2 B2 C2 D2 W X NaN NaN ``` I'm sure there are a few different ways to tackle that, I would be inclined to create two separate Dataframes, one for the first set of tuples one for the second, then do an outer merge. The following worked when I tried it with a sample of your data: ``` # Create dictionaries from the first and second tuples, respectively orders = {i: all_orders[i][0] for i in range(len(all_orders))} stop_orders = {i: all_orders[i][1] for i in range(len(all_orders))} # Convert dictionaries into DFs and give appropriate column names orders_df = pd.DataFrame.from_dict(orders, orient="index") orders_df.columns = ["ID", "Date", "Price", "Label", "Amount Invested", "Stock Shares"] stop_orders_df = pd.DataFrame.from_dict(stop_orders, orient="index") stop_orders_df.columns = ["ID", "Date", "Price", "Label", "Profit/Loss ($)", "Profit/Loss (%)"] # Execute an outer merge so all columns are retained and columns that are in one DF but not in the other are filled with NA all_orders_df = pd.merge(orders_df, stop_orders_df, how="outer") ``` Hope that helps! There may be more performant approaches if you have a lot of data, but the above should get the job done.
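To illustrate the last step in isolation, here is a minimal sketch of what the outer merge does with two frames that share some columns but not others (the data and column names are made up for demonstration; only a subset of the columns above is used):

```python
import pandas as pd

orders_df = pd.DataFrame(
    {"ID": [1], "Price": [10.0], "Stock Shares": [5]})
stop_orders_df = pd.DataFrame(
    {"ID": [2], "Price": [12.0], "Profit/Loss ($)": [2.5]})

# Outer merge keeps every column from both frames;
# cells that exist in only one frame are filled with NaN
all_orders_df = pd.merge(orders_df, stop_orders_df, how="outer")
print(all_orders_df.columns.tolist())
# ['ID', 'Price', 'Stock Shares', 'Profit/Loss ($)']
print(len(all_orders_df))  # 2
```

The row coming from `stop_orders_df` ends up with NaN in "Stock Shares", exactly like the rows in the answer's merged frame.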
null
CC BY-SA 4.0
null
2022-10-24T19:57:08.503
2022-10-24T19:57:08.503
null
null
14,873,913
null
74,186,310
2
null
74,182,481
0
null
You'll need code that moves all the snake's content by one position. Therefore you need to keep track of the coordinates that are part of the snake -- not just the last position. Also avoid code repetition, and try to deal with a move in one common way. I didn't quite get the input format, so I just limited the valid input to the 4 direction letters and "Q" to quit: ```
def GiveTab(tab):
    for row in tab:
        print(" ".join(row))

tab = [
    ['0', '.', '1', '.', '.'],
    ['.', '.', '2', '.', '.'],
    ['.', '.', '.', '1', '3'],
    ['.', '.', '.', '.', '.'],
    ['5', '.', '.', '.', '.']
]
snake = [(0, 0)]
while True:
    GiveTab(tab)
    move = input('Give order: ').upper()
    if move == "Q":
        print("thanks for playing")
        break
    if not move in "URDL":
        print("invalid move")
        continue
    i = "URDL".index(move)
    drow, dcol = ((-1, 0), (0, 1), (1, 0), (0, -1))[i]
    nextrow = snake[0][0] + drow
    nextcol = snake[0][1] + dcol
    if not (0 <= nextrow < len(tab)) or not (0 <= nextcol < len(tab[0])):
        print("invalid move")
        continue
    snake.insert(0, (nextrow, nextcol))
    if tab[nextrow][nextcol] == '.':
        # need to move the whole snake
        for (nextrow, nextcol), (prevrow, prevcol) in zip(snake, snake[1:]):
            tab[nextrow][nextcol] = tab[prevrow][prevcol]
        delrow, delcol = snake.pop()
        tab[delrow][delcol] = '.'
```
null
CC BY-SA 4.0
null
2022-10-24T20:11:43.643
2022-10-24T20:11:43.643
null
null
5,459,839
null
74,186,600
2
null
46,582,604
0
null
Simply remove items from the list and then pass the updated list to the adapter. For example, I removed the current user from the list in the HomePage before passing it to the Users adapter: ``` for (DataSnapshot dataSnapshot : snapshot.getChildren()) { Users user = dataSnapshot.getValue(Users.class); if(!FirebaseAuth.getInstance().getCurrentUser().getUid().equals(user.getUid())) userArrayList.add(user); } ```
null
CC BY-SA 4.0
null
2022-10-24T20:40:37.600
2022-10-24T20:40:37.600
null
null
12,735,757
null
74,187,353
2
null
28,662,039
0
null
I know this is an old question, but here is what worked for me. ``` <style name="AppTheme" parent="Theme.MaterialComponents.DayNight.DarkActionBar"> <item name="autoCompleteTextViewStyle">@style/MyAutoCompleteTextView</item> <item name="android:dropDownItemStyle">@style/MyDropDownItemStyle</item> </style> ``` Below changes the suggestion dropdown background color on the light theme: ``` <style name="MyAutoCompleteTextView" parent="Widget.AppCompat.Light.AutoCompleteTextView"> <item name="android:popupBackground">@android:color/white</item> </style> ``` Below changes the suggestion dropdown text color to black on the light theme: ``` <style name="MyDropDownItemStyle" parent="Widget.AppCompat.DropDownItem.Spinner"> <item name="android:textColor">@color/black</item> </style> ```
null
CC BY-SA 4.0
null
2022-10-24T22:15:31.943
2022-10-24T22:15:31.943
null
null
2,476,537
null
74,187,643
2
null
74,184,523
0
null
I got it to work when I got rid of all of the "className" attributes in ./card/card, but I am not sure why.
null
CC BY-SA 4.0
null
2022-10-24T23:05:22.293
2022-10-24T23:05:22.293
null
null
16,883,589
null
74,187,690
2
null
74,187,600
0
null
It seems you have two matrices (one 1x43265 and one 16x512). You are trying to multiply them, but their product is not mathematically defined: you need one matrix to be an (a x b) matrix and the other to be a (b x c) matrix. That's why your program can't run. If your images are a test dataset, try to follow the instructions step by step. If not, your preprocessing is probably bad.
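For illustration (this sketch is not part of the original answer; it just uses the shapes from the error message with NumPy), the product is only defined when the inner dimensions agree:

```python
import numpy as np

a = np.ones((1, 43265))   # shapes taken from the error message
b = np.ones((16, 512))

try:
    a @ b                 # inner dimensions 43265 and 16 do not match
except ValueError as err:
    print("product undefined:", err)

# A product is defined only for shapes (a x b) @ (b x c):
c = np.ones((3, 4)) @ np.ones((4, 5))
print(c.shape)            # (3, 5)
```

The same rule applies to PyTorch/TensorFlow tensors, which is why a shape mismatch from bad preprocessing surfaces as exactly this kind of error.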
null
CC BY-SA 4.0
null
2022-10-24T23:15:00.093
2022-10-24T23:15:00.093
null
null
19,527,123
null
74,187,676
2
null
74,186,254
1
null
Creating lists with different names can be a totally wrong idea. You should rather create a single list with sublists (and indexes instead of names) or a single dictionary with names as keys. Or even better, you should create a single `DataFrame` with all values (in rows or columns). It will be more useful for further calculations. And all this may not need a `for`-loop. --- But I think you may do it in a different way. You could create a column with `Year Month` ``` df['Year_Month'] = df['Close_Date'].dt.strftime('%Y-%m') ``` And later use `groupby()` to execute a function on every month without using `for`-loops. ``` averages = df.groupby('Year_Month').mean() sizes = df.groupby('Year_Month').size() ``` --- Minimal working code with example data: ``` import pandas as pd #df = pd.read_excel('short data.xlsx', parse_dates=['Closed Date Time']) data = { 'Closed Date Time': ['2022.10.25 01:00', '2022.10.24 01:00', '2018.10.25 01:00', '2018.10.24 01:00', '2018.10.23 01:00'], 'Price': [1, 2, 3, 4, 5], 'User': ['A','A','A','B','C'], } df = pd.DataFrame(data) print(df) df['Closed Date Time'] = pd.to_datetime(df['Closed Date Time']) df['Close_Date'] = df['Closed Date Time'].dt.date df['Close_Date'] = pd.to_datetime(df['Close_Date']) df['Year_Month'] = df['Close_Date'].dt.strftime('%Y-%m') print(df) print('\n--- averages ---\n') averages = df.groupby('Year_Month').mean() print(averages) print('\n--- sizes ---\n') sizes = df.groupby('Year_Month').size() print(sizes) ``` Result: ``` Closed Date Time Price 0 2022.10.25 01:00 1 1 2022.10.24 01:00 2 2 2018.10.25 01:00 3 3 2018.10.24 01:00 4 4 2018.10.23 01:00 5 Closed Date Time Price Close_Date Year_Month 0 2022-10-25 01:00:00 1 2022-10-25 2022-10 1 2022-10-24 01:00:00 2 2022-10-24 2022-10 2 2018-10-25 01:00:00 3 2018-10-25 2018-10 3 2018-10-24 01:00:00 4 2018-10-24 2018-10 4 2018-10-23 01:00:00 5 2018-10-23 2018-10 --- averages --- Price Year_Month 2018-10 4.0 2022-10 1.5 --- sizes --- Year_Month 2018-10 3 2022-10 2 dtype: int64 ``` --- ``` data = 
df.groupby('Year_Month').agg({'Price':['mean','size']}) print(data) ``` Result: ``` Price mean size Year_Month 2018-10 4.0 3 2022-10 1.5 2 ``` --- Example with `.groupby()` and `.apply()` to execute more complex function. And later it uses `.to_dict()` and `.plot()` ``` import pandas as pd #df = pd.read_excel('short data.xlsx', parse_dates=['Closed Date Time']) data = { 'Closed Date Time': ['2022.10.25 01:00', '2022.10.24 01:00', '2018.10.25 01:00', '2018.10.24 01:00', '2018.10.23 01:00'], 'Price': [1, 2, 3, 4, 5], 'User': ['A','A','A','B','C'], } df = pd.DataFrame(data) #print(df) df['Closed Date Time'] = pd.to_datetime(df['Closed Date Time']) df['Close_Date'] = df['Closed Date Time'].dt.date df['Close_Date'] = pd.to_datetime(df['Close_Date']) df['Year_Month'] = df['Close_Date'].dt.strftime('%Y-%m') #print(df) def calculate(group): #print(group) #print(group['Price'].mean()) #print(group['User'].unique().size) result = { 'Mean': group['Price'].mean(), 'Users': group['User'].unique().size, 'Div': group['Price'].mean()/group['User'].unique().size } return pd.Series(result) data = df.groupby('Year_Month').apply(calculate) print(data) print('--- dict ---') print(data.to_dict()) #print(data.to_dict('dict')) print('--- records ---') print(data.to_dict('records')) print('--- list ---') print(data.to_dict('list')) print('--- index ---') print(data.to_dict('index')) import matplotlib.pyplot as plt data.plot(kind='bar', rot=0) plt.show() ``` Result: ``` Mean Users Div Year_Month 2018-10 4.0 3.0 1.333333 2022-10 1.5 1.0 1.500000 --- dict --- {'Mean': {'2018-10': 4.0, '2022-10': 1.5}, 'Users': {'2018-10': 3.0, '2022-10': 1.0}, 'Div': {'2018-10': 1.3333333333333333, '2022-10': 1.5}} --- records --- [{'Mean': 4.0, 'Users': 3.0, 'Div': 1.3333333333333333}, {'Mean': 1.5, 'Users': 1.0, 'Div': 1.5}] --- list --- {'Mean': [4.0, 1.5], 'Users': [3.0, 1.0], 'Div': [1.3333333333333333, 1.5]} --- index --- {'2018-10': {'Mean': 4.0, 'Users': 3.0, 'Div': 1.3333333333333333}, '2022-10': 
{'Mean': 1.5, 'Users': 1.0, 'Div': 1.5}} ``` [](https://i.stack.imgur.com/crDyv.png)
null
CC BY-SA 4.0
null
2022-10-24T23:12:41.757
2022-10-25T08:16:45.860
2022-10-25T08:16:45.860
1,832,058
1,832,058
null
74,187,830
2
null
74,186,108
2
null
## Preface: SQL Server's Full-Text Search engine comes with support for many binary (non-text) file-formats, including most Microsoft Office document formats (both the old `.doc`/`.xls` files and the post-2007 OOXML+Zip-based `.docx`/`.xlsx` formats). You can get a list of supported file-types (denoted by their filename extension rather than their MIME Content-Type, which I'd have preferred...) by running `select * from sys.fulltext_document_types;` against your server. (If you're running on-prem SQL Server (not Azure SQL) you can install custom `IFilter` libraries to add support for additional file-types, [though Azure SQL is limited to first-party Microsoft-provided IFilter libraries](https://learn.microsoft.com/en-us/sql/relational-databases/system-catalog-views/sys-fulltext-document-types-transact-sql?view=sql-server-ver16)). ## The problem: - You apparently have an Excel `.xlsx` file (which is a Zip-based file format) stored in a `varbinary(n)` column.- The Excel file is indexed by SQL Server Full-Text Search:- Full-Text Search can read the file because it is able to use the `IFilter` library for Excel `.xlsx` files.- ... which is why the `CONTAINS` predicate function (and the `CONTAINSTABLE`, `FREETEXT`, and `FREETEXTTABLE` functions too, presumably) is able to successfully and correctly return query results that show your Excel file contains that text.- ...but your own code sees only the raw `varbinary` data; it does not attempt to load or read the stored Excel `.xlsx` file (i.e. by being aware of the file-format and unpacking the OOXML `.zip` container, then reading the inner OOXML `.xml` files to go through the actual content within).- ...but what (I think?) you want is to be able to show a snippet from the Excel workbook that corresponds to the user's search terms found in the document (just like in Google/Bing Web Search results).
null
CC BY-SA 4.0
null
2022-10-24T23:42:13.420
2022-10-24T23:42:13.420
null
null
159,145
null
74,187,881
2
null
61,436,764
0
null
`LocalStorage` does not behave as expected with `<React.StrictMode>` when it is accessed within the render method. If you have just created a new application, you may find that your `index.jsx` or `main.jsx` is wrapping your root DOM element by default, similar to this: ``` ReactDOM.createRoot(document.getElementById("root")).render( <React.StrictMode> <App /> </React.StrictMode> ); ``` If you are ok with removing strict mode, then LocalStorage will return to the usual behavior: ``` ReactDOM.createRoot(document.getElementById("root")).render(<App />); ``` Strict mode will also double-render your app if you are in development mode, but not in production [see here](https://reactjs.org/docs/strict-mode.html). However, in production LocalStorage will also work even with StrictMode wrapping your App component as was shown in the first code block above. > "StrictMode is a tool for highlighting potential problems in an application. Like Fragment, StrictMode does not render any visible UI. It activates additional checks and warnings for its descendants."
null
CC BY-SA 4.0
null
2022-10-24T23:53:06.517
2022-10-24T23:59:48.027
2022-10-24T23:59:48.027
1,783,588
1,783,588
null
74,188,344
2
null
74,142,449
0
null
It's not clear why you are using `HtmlService.createTemplateFromFile`, but from the image it's clear there is at least one error: the script is missing calls to two methods: - `HtmlService.HtmlTemplate.evaluate()`- `HtmlService.HtmlOutput.getContent()` (a method of `HtmlService.HtmlOutput`) Another option that looks to be an error is the use of the `GmailApp.sendEmail(string,string,string)` method, as the third parameter should be a string to be used as the email's plain-text content. If you want to pass HTML, instead use `GmailApp.sendEmail(string,string,string, Object)` Related - [Emailing Google Sheet range (with or without formatting) as a HTML table in a Gmail message](https://stackoverflow.com/q/36529890/1595451)- [Sending an email in HTML and plain with a Gmail Apps Script](https://stackoverflow.com/q/45883782/1595451)- [Google script inject html into html email](https://stackoverflow.com/q/57764976/1595451) References - [https://developers.google.com/apps-script/guides/html](https://developers.google.com/apps-script/guides/html)- [https://developers.google.com/apps-script/guides/html/templates](https://developers.google.com/apps-script/guides/html/templates)- [https://developers.google.com/apps-script/reference/gmail/gmail-app#sendEmail(String,String,String)](https://developers.google.com/apps-script/reference/gmail/gmail-app#sendEmail(String,String,String))
null
CC BY-SA 4.0
null
2022-10-25T01:43:37.520
2022-10-25T01:52:06.677
2022-10-25T01:52:06.677
1,595,451
1,595,451
null
74,188,531
2
null
74,177,982
0
null
Thanks for your suggestion FunThomas, I managed to make it work based on that code. My code is: ``` Option Explicit Public Sub ResizePicture() Dim sh As Shape, ws2 As Worksheet Dim Cel As Range Dim Rng As Range Set ws2 = Worksheets("Rev.0") Set Rng = Selection For Each Cel In Rng With Cel Set sh = ws2.Shapes(ws2.Shapes.Count) 'get last shape, i.e. pasted picture If .Height / sh.Height < .Width / sh.Width Then sh.ScaleHeight .Height / sh.Height, msoFalse Else sh.ScaleWidth .Width / sh.Width, msoFalse End If End With Next Cel End Sub ```
null
CC BY-SA 4.0
null
2022-10-25T02:21:35.633
2022-10-25T02:21:35.633
null
null
20,173,119
null
74,188,766
2
null
14,885,557
0
null
I met the same problem, and finally found that there is a parameter that changes on every run. My solution: copy and paste the mock parameter and the real parameter together, compare them, and also make sure your next unit test generates a new parameter.
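To sketch that side-by-side comparison technique (the asker's mocking framework isn't stated, so this is a hypothetical Python example using `unittest.mock`; `send` and `client.post` are made-up names): the mock records the parameters it was actually called with, which can then be compared against the expected ones.

```python
from unittest.mock import Mock

def send(client, payload):
    # hypothetical code under test
    client.post("/api", payload)

client = Mock()
send(client, {"id": 1, "token": "abc"})

# Put the expected and recorded parameters next to each other, which makes
# it easy to spot the one value that changes between runs
expected = ("/api", {"id": 1, "token": "abc"})
actual = client.post.call_args.args
print("expected:", expected)
print("actual:  ", actual)
client.post.assert_called_once_with(*expected)
```

If a field such as a timestamp or random token changes every time, either freeze it in the test or compare the remaining fields individually.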
null
CC BY-SA 4.0
null
2022-10-25T03:14:07.727
2022-10-25T03:14:07.727
null
null
445,908
null
74,189,138
2
null
74,188,947
0
null
As asked, this question is strictly GitLab, i.e., about how GitLab goes about showing you the commit graph. That means it should just be tagged with [gitlab](/questions/tagged/gitlab). But there's a more general form of the same question, which is: what the heck is a branch anyway? To see more about this question, read [What exactly do we mean by "branch"?](https://stackoverflow.com/q/25068543/1256452) The graph you've shown here (I edited the question slightly to get it included inline although it's so small that one must generally click through to read it anyway) has two labels attached, namely `develope` (including that final `e`) and `main`. Those are your two branch names. So you do have two branches. If you've read the question I linked, you may now realize that you have even more branches. For instance, the initial commit, all the way at the bottom, is by itself a branch, containing one commit. The bottom two commits, treated as a pair, is also a branch. Neither of these branches have names, but they are sub-graphs of the overall commit graph. Any connected set of any of these commits can be a "branch". The merge commit four steps down from the top shows where you merged a commit into another commit, forming one branch from two branches. These two separate branches can now be treated as a single branch by using the merge commit as the last commit of that branch. Or, the two parents of that merge commit can be treated as two separate branches, which share the commits from the point where the graphs diverge. These earlier separate branches had names `develope` and `develope`, in two different Git repositories. Although the names were spelled identically, these two separate branches were different branches. The merge commit that united them made a new branch, which was named `develope`. The names for each of these branches are quite irrelevant! They only matter in terms of locating the tip commits, from which Git finds the earlier commits. 
By identifying any one particular commit as a tip commit, you instantly form a branch—whether it has a name or not—from there on backwards. This—the fact that "branch", in Git, isn't really a meaningful term—is why you need to think about a Git repository in terms of commits, rather than branches.
null
CC BY-SA 4.0
null
2022-10-25T04:29:25.873
2022-10-25T04:29:25.873
null
null
1,256,452
null
74,189,232
2
null
74,188,357
0
null
This is a script to make the layer named "text" invisible in every open document and save after the edit: ``` for(var i=0;i<app.documents.length;i++) { var oDoc=app.documents[i]; for(var j=0;j<oDoc.layers.length; j++) { if(oDoc.layers[j].name=="text") { oDoc.layers[j].visible=false; } } oDoc.save(); } ```
null
CC BY-SA 4.0
null
2022-10-25T04:46:11.070
2022-10-25T04:46:11.070
null
null
1,497,597
null
74,189,287
2
null
74,188,947
0
null
All is fine. Your `main` branch is displayed: it points at commit `"fix ajax for register and add validation"` --- It turns out that, in your current situation, `develop` and `main` haven't forked, and `develop` is ahead of `main` (all of that is very normal). So GitLab chose to draw only one single history, with the latest commit of `develop` on top and `main` pointing somewhere below -- all git graphical tools I know will do that.
null
CC BY-SA 4.0
null
2022-10-25T05:01:30.443
2022-10-25T06:15:38.203
2022-10-25T06:15:38.203
86,072
86,072
null
74,189,472
2
null
74,184,536
1
null
Check out `actions` inside `environment`: [https://docs.gitlab.com/ee/ci/yaml/#environmentaction](https://docs.gitlab.com/ee/ci/yaml/#environmentaction). There are a few actions you can use which won't trigger a deployment, e.g. for a build job you can use `prepare`: ``` job_build_preprod: script: - echo $VAULT_PATH environment: name: preprod action: prepare url: https://test.com ```
null
CC BY-SA 4.0
null
2022-10-25T05:33:23.903
2022-10-25T05:33:23.903
null
null
12,289,730
null
74,189,546
2
null
74,187,873
1
null
After analyzing your code, the submit button is outside of the `<form>` tag; it needs to be inside it, as in `<form><input type="submit" value="Submit"></form>`. In a Bootstrap modal, open the `<form>` before the modal body and close it after the footer buttons: ``` <!-- BOOTSTRAP FORM MODAL --> <div class="modal" tabindex="-1"> <div class="modal-dialog"> <div class="modal-content"> <div class="modal-header"> <!-- Title --> </div> <form action="#"> <!--FORM TAG OPEN --> <div class="modal-body"> <!-- INPUT FIELDS--> </div> <div class="modal-footer"> <button type="button" class="btn btn-secondary" data-bs-dismiss="modal">Close</button> <button type="submit" class="btn btn-primary">Save changes</button> </div> <!--FORM TAG CLOSE --> </form> </div> </div> </div> ```
null
CC BY-SA 4.0
null
2022-10-25T05:45:09.617
2022-10-25T05:45:09.617
null
null
20,306,839
null
74,189,628
2
null
74,188,944
0
null
I'll answer the ggplot2 question. I assume your data looks like this: ``` set.seed(42) DF <- data.frame(g = c(0, 0, 1, 1), m = rnorm(4, mean = c(1, 1, 2, 2), sd = 0.1), s = rnorm(4, mean = 0.5, sd = 0.1)) # g m s #1 0 1.1370958 0.5404268 #2 0 0.9435302 0.4893875 #3 1 2.0363128 0.6511522 #4 1 2.0632863 0.4905341 ``` Then you can use `stat_function` like this: ``` library(ggplot2) ggplot() + lapply(split(DF, seq_len(nrow(DF))), \(x) stat_function(fun = dnorm, args = list(mean = x$m, sd = x$s), aes(color = factor(x$g)))) + xlim(-1, 4) ``` [](https://i.stack.imgur.com/rtpnU.png) Note that the y axis does not show probability (as your mock-up wrongly shows). If you want to have probability as y, you need to plot the cumulative distribution function (the integral of the probability density function).
null
CC BY-SA 4.0
null
2022-10-25T05:56:51.913
2022-10-25T06:01:56.687
2022-10-25T06:01:56.687
1,412,059
1,412,059
null
74,189,638
2
null
74,188,307
1
null
# In short `Login` should in principle not be a use case. But if you keep it, don’t duplicate it for different actors: prefer an additional association with the same use case, or better, refactor your diagram to use generalization. The extension means that `Change Profile Status` may in some situations enrich the behaviors and interactions of `Login`. # More explanations ### 1. Is the login a use-case at all? There is no order among use cases, and a use case should be a reason for the actor to use the system. According to your diagram, an actor could use the system just for the sole purpose of `Login` (really?). Or do a `Managing schedule` without a login (oops). This suggests that the login is not really a use case but a constraint, or an action in the activity diagrams that would describe each use case. (By the way, login is often obsolete in an era of Single Sign On that makes it happen behind the scene without user involvement; `authenticate user` would be the corresponding action.) ### 2. Ambiguity of a use case with several actors The UML specification does not define the semantics of a use case with several actors. It can mean that one among several can perform the use-case (but multiplicity should be set accordingly), or that several or all are involved in the use case, but without telling if it’s at the same time or one after the other. In any case, it’s ambiguous, even if most of us get the intended meaning right when reading the diagram. On the other side, . ### Is there an alternative? Two popular techniques help to disambiguate having several actors aiming for the same use-case: - `User``Student``Admin``User`-
null
CC BY-SA 4.0
null
2022-10-25T05:58:20.690
2022-10-27T20:49:52.703
2022-10-27T20:49:52.703
3,723,423
3,723,423
null
74,190,751
2
null
60,325,478
0
null
If you did a WSL 2 backend install of Docker (which is now the default option), you need to create a `.wslconfig` file in your user directory. The file should have the following structure: ``` [wsl2] memory=19GB # Limits VM memory in WSL 2 swap=110GB ``` Windows documentation: [https://learn.microsoft.com/en-us/windows/wsl/wsl-config#configure-global-options-with-wslconfig](https://learn.microsoft.com/en-us/windows/wsl/wsl-config#configure-global-options-with-wslconfig)
null
CC BY-SA 4.0
null
2022-10-25T07:55:16.427
2022-10-25T07:55:16.427
null
null
5,304,366
null
74,190,749
2
null
29,155,350
1
null
Maybe this solution will also help someone... 1. Open the Database dialog window from the right side of IntelliJ 2. Go to DB Data Source Properties (find it in the top menu) 3. Go to Schemas 4. Uncheck "Default database" 5. Check your specific DB and inside also check the default schema (public) Good luck!
null
CC BY-SA 4.0
null
2022-10-25T07:55:02.097
2022-10-25T07:55:02.097
null
null
16,730,985
null
74,190,890
2
null
74,187,865
0
null
I tried removing all options of the sub select list when you select an item in the main select list, then adding all items which were not selected in the main select list to the sub select list, as below: Models: ``` public class CompanyMember { public int Id { get; set; } public string Name { get; set; } } public class GroupMember { public int Id { get; set; } public int GroupId { get; set; } public string Name { get; set; } } public class SomeViewModel { public SomeViewModel() { CompanyMemberList = new List<CompanyMember>(); GroupMemberList = new List<GroupMember>(); } public List<CompanyMember> CompanyMemberList { get; set; } public List<GroupMember> GroupMemberList { get; set; } } ``` controller: ``` public IActionResult MemberPage() { var vm = new SomeViewModel() { CompanyMemberList = new List<CompanyMember>() { new CompanyMember(){Id=1,Name="jiang"}, new CompanyMember(){Id=2,Name="yang"}, new CompanyMember(){Id=3,Name="li"}, new CompanyMember(){Id=4,Name="wang"}, new CompanyMember(){Id=5,Name="zhang"} }, GroupMemberList = new List<GroupMember>() { new GroupMember(){Id=1,GroupId=1,Name="jiang"}, new GroupMember(){Id=3,GroupId=1,Name="li"}, new GroupMember(){Id=5,GroupId=1,Name="zhang"} } }; return View(vm); } ``` View: @model SomeViewModel ``` @{ var companyselectlist = new SelectList(Model.CompanyMemberList, "Id", "Name"); var groupselectlist = new SelectList(Model.GroupMemberList, "Id", "Name"); } <select asp-for="CompanyMemberList" asp-items="companyselectlist" onchange="dynamicgroup()"></select> <select asp-for=GroupMemberList asp-items="groupselectlist"></select> <script src="~/lib/jquery/dist/jquery.min.js"></script> <script> //get the array of value,text attr of sub selectlist function GetGroupList() { var grouparry = new Array(); $('#GroupMemberList').children().each(function () { var obj = {} obj['v'] = $(this).val() obj['text'] = $(this).text() grouparry.push(obj) }); return grouparry; } var grouparr = GetGroupList(); function dynamicgroup() { //remove all options of sub 
selectlist $("#GroupMemberList option").remove() var somedic = grouparr // get the value attr of selected options of main selectlist var selectarr=new Array() $("#CompanyMemberList option:selected").each(function () { selectarr.push($(this).val()) }) //add the option item to sub selectlist if not selected in main select list somedic.forEach((item, index, array) => { if (($.inArray(item.v, selectarr)) == -1) { $("#GroupMemberList").append(new Option(item.text, item.v, false, false)) } }) } </script> ``` Result: [](https://i.stack.imgur.com/35nw8.gif)
null
CC BY-SA 4.0
null
2022-10-25T08:06:32.143
2022-10-25T08:42:17.250
2022-10-25T08:42:17.250
18,177,989
18,177,989
null
74,191,043
2
null
61,549,163
0
null
I had exactly the same problem on this subject, and I resolved it using this technique, but the problem is that it cannot directly carry the images, which is a pity (the same goes for icons...). I also realized that the error was talking about the "project name": I changed it at the top of MyForm.h and MyForm.cpp to match the project name, and after that it worked. I don't know if it will work for you, but I hope so. ;)
null
CC BY-SA 4.0
null
2022-10-25T08:19:54.400
2022-10-25T08:19:54.400
null
null
20,328,350
null
74,191,736
2
null
24,863,164
1
null
This article will be pretty much helpful for your problem. [https://medium.com/safetycultureengineering/analyzing-and-improving-memory-usage-in-go-46be8c3be0a8](https://medium.com/safetycultureengineering/analyzing-and-improving-memory-usage-in-go-46be8c3be0a8) I ran a pprof analysis. pprof is a tool that’s baked into the Go language that allows for analysis and visualisation of profiling data collected from a running application. It’s a very helpful tool that collects data from a running Go application and is a great starting point for performance analysis. I’d recommend running pprof in production so you get a realistic sample of what your customers are doing. When you run pprof you’ll get some files that focus on goroutines, CPU, memory usage and some other things according to your configuration. We’re going to focus on the heap file to dig into memory and GC stats. I like to view pprof in the browser because I find it easier to find actionable data points. You can do that with the below command. `go tool pprof -http=:8080 profile_name-heap.pb.gz` pprof has a CLI tool as well, but I prefer the browser option because I find it easier to navigate. My personal recommendation is to use the flame graph. I find that it’s the easiest visualiser to make sense of, so I use that view most of the time. The flame graph is a visual version of a function’s stack trace. The function at the top is the called function, and everything underneath it is called during the execution of that function. You can click on individual function calls to zoom in on them which changes the view. This lets you dig deeper into the execution of a specific function, which is really helpful. Note that the flame graph shows the functions that consume the most resources so some functions won’t be there. This makes it easier to figure out where the biggest bottlenecks are. Is this helpful?
null
CC BY-SA 4.0
null
2022-10-25T09:15:35.147
2022-10-25T09:15:35.147
null
null
20,201,615
null
74,191,804
2
null
72,104,175
-1
null
Use .then instead of await in your application
null
CC BY-SA 4.0
null
2022-10-25T09:21:07.130
2022-10-25T09:21:07.130
null
null
16,823,353
null
74,191,881
2
null
74,191,300
0
null
I guess this is not possible. The reason is that all columns are generated dynamically based on your data in the table. That's why you need to apply conditions to color cells as you want. Now, as you showed in the sample presentation, if you have that fixed amount of rows and columns with known fixed values in different cells, you can use a CARD visual for each individual value and then apply your expected background. Not a good approach, but the end user will see the expected output :)
null
CC BY-SA 4.0
null
2022-10-25T09:27:56.763
2022-10-25T09:27:56.763
null
null
3,652,345
null
74,192,139
2
null
37,720,122
1
null
I'm just adding one thing to the accepted answer: when you try your code with the accepted code above, the watermarks on the image will be very close together, like this [](https://i.stack.imgur.com/oK2c0.jpg) So if you want the watermarks spaced out like what you want, you need to modify the code by adding numbers to `$wmarkHeight` and `$wmarkWidth` like this: ``` while ($x < $imgWidth) { $y = 0; while($y < $imgHeight) { $imgFileCollection->insert($watermark, 'top-left', $x, $y); $y += $wmarkHeight+30; } $x += $wmarkWidth+40; } ``` these lines of code are important: ``` $y += $wmarkHeight+30; $x += $wmarkWidth+40; ``` and you will get the result like that below: [](https://i.stack.imgur.com/cFILh.jpg)
null
CC BY-SA 4.0
null
2022-10-25T09:49:07.143
2022-10-25T09:49:07.143
null
null
16,206,064
null
74,192,446
2
null
74,191,814
0
null
The typical method for edge detection is to take a "derivative" or to convolve with an "edge detection kernel" (e.g. Sobel, Laplacian of Gaussian, ...). For instance, this gives a very simple horizontal derivative: ``` # kernel: (-1, 1) dx = img[:, 1:] - img[:, :-1] ``` Second order operator: ``` # kernel: (-1, 2, -1) dx = -img[:, 2:] + 2 * img[:, 1:-1] - img[:, :-2] ``` ...which incidentally creates a map of all the vertical edges. To convert to a boolean map, one may follow up with a thresholding operation on the result: ``` edge = dx > THRESHOLD ``` Typical Laplacian of Gaussian kernel: ``` edge = ( 0 * img[:-2, :-2] + -1 * img[:-2, 1:-1] + 0 * img[:-2, 2:] + -1 * img[1:-1, :-2] + 4 * img[1:-1, 1:-1] + -1 * img[1:-1, 2:] + 0 * img[2:, :-2] + -1 * img[2:, 1:-1] + 0 * img[2:, 2:] ) ``` If you're not restricted to linear operators, you can even do stuff like `edges = abs(dx) + abs(dy)`... just be careful of phase shifts. --- Another "non-linear" method as suggested by @Yves is to use 4-connected neighbors. One possible implementation: ``` img = (img >= 128).astype(np.uint8) img_p = np.pad(img, 1, mode='edge') connectivity = ( 0 * img_p[ :-2, :-2] + 1 * img_p[ :-2, 1:-1] + 0 * img_p[ :-2, 2:] + 1 * img_p[1:-1, :-2] + 0 * img_p[1:-1, 1:-1] + 1 * img_p[1:-1, 2:] + 0 * img_p[2:, :-2] + 1 * img_p[2:, 1:-1] + 0 * img_p[2:, 2:] ) edge = (img == 1) & (connectivity >= 1) & (connectivity < 4) ```
null
CC BY-SA 4.0
null
2022-10-25T10:12:15.513
2022-10-25T21:33:54.460
2022-10-25T21:33:54.460
365,102
365,102
null
74,192,496
2
null
73,272,301
0
null
You need to return just the `hub_challenge`. This worked for me: `return $req['hub_challenge'];`
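The same verification handshake can be sketched language-agnostically; the helper below is a hypothetical Python illustration (not the asker's code), with underscore keys matching the way the answer's framework exposes `hub.challenge` as `hub_challenge`. The point is that only the raw challenge value is echoed back, with no wrapper around it.

```python
def verify_webhook(params, verify_token):
    """Echo back the raw hub_challenge when the verify token matches."""
    if (params.get("hub_mode") == "subscribe"
            and params.get("hub_verify_token") == verify_token):
        return params["hub_challenge"]   # nothing else -- no JSON wrapper
    return None

print(verify_webhook(
    {"hub_mode": "subscribe", "hub_verify_token": "s3cret",
     "hub_challenge": "1158201444"},
    "s3cret",
))
```

Returning anything other than the bare challenge string (quotes, JSON, extra whitespace) makes the subscription check fail.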
null
CC BY-SA 4.0
null
2022-10-25T10:16:52.877
2022-10-31T04:44:45.247
2022-10-31T04:44:45.247
5,559,590
18,809,514
null
74,192,690
2
null
74,192,355
0
null
Try following ``` Sub TickToggle() On Error GoTo EH Dim caller As Variant caller = Application.caller If Not IsError(caller) Then ActiveSheet.Shapes(caller).ZOrder msoSendToBack Select Case caller Case "Checkbox1_tick": ' your code for when the checkbox1 is unticked Case "Checkbox1_untick": ' your code for when the checkbox1 is ticked End Select End If EH: End Sub ``` Also, try using more meaningful names for those shapes (like above) in order to use select case on the caller value and do different things depending on what has been clicked. You can assign `TickToggle` to all checkbox shapes, this is preferred (re-usability).
null
CC BY-SA 4.0
null
2022-10-25T10:32:45.510
2022-10-25T10:52:12.110
2022-10-25T10:52:12.110
20,076,134
20,076,134
null
74,192,768
2
null
74,191,814
1
null
An "edge" pixel is characterized by a simple property: it is black and at least one of its neighbors is white (you can choose between the 4- or 8- connected neighbors). This is fairly simple to implement. On the sides of the image, consider that "outer" pixels are white.
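That rule translates directly into array operations. Here is a small NumPy sketch (added for illustration, not part of the original answer), using a boolean image where `True` means black and padding with white so that "outer" pixels count as white:

```python
import numpy as np

# True = black; a 3x3 black square inside a white 5x5 image
img = np.zeros((5, 5), dtype=bool)
img[1:4, 1:4] = True

# Pad with white so pixels on the image border see white "outer" neighbors
p = np.pad(img, 1, constant_values=False)
has_white_4_neighbor = (~p[:-2, 1:-1] | ~p[2:, 1:-1] |   # up / down
                        ~p[1:-1, :-2] | ~p[1:-1, 2:])    # left / right

# Edge = black pixel with at least one white 4-connected neighbor
edge = img & has_white_4_neighbor
print(edge.astype(int))   # the 8-pixel ring is marked, the center is not
```

For 8-connectivity, add the four diagonal shifts to `has_white_4_neighbor` in the same way.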
null
CC BY-SA 4.0
null
2022-10-25T10:40:01.563
2022-10-25T12:59:35.973
2022-10-25T12:59:35.973
null
null
null
74,193,754
2
null
23,223,017
0
null
I created a cache dir for symbols and loaded them all. The Modules window told me that the proper PDB loaded OK, but I still could not step into MFC. After playing with other solutions, like adding the symbols cache and MFC lib directories to symbol locations, and linking statically (which worked), it appeared that simply checking did the trick. I tried stepping into CWinAppEx::InitInstance and now I can. [](https://i.stack.imgur.com/fIvxL.png) Configuration: MSVS2022 Pro, Windows 10, use MFC as a dynamic library.
null
CC BY-SA 4.0
null
2022-10-25T12:00:24.200
2022-10-25T12:00:24.200
null
null
12,390,193
null
74,194,020
2
null
30,974,433
0
null
The best solution I can find is ReSharper: you can then run the tool's analysis and look for "Type member is never used". I know this is not ideal, but it's the best solution I can find.
null
CC BY-SA 4.0
null
2022-10-25T12:24:19.197
2022-10-25T12:24:19.197
null
null
1,250,250
null
74,194,136
2
null
74,194,054
2
null
When reading a CSV file where a field has comma-delimited numbers, it will always be a string; it cannot be read as a number. This is because "what number" is ambiguous, so R will make you choose. There are such things as list-columns that will allow each field to be a list of numbers, but most vector-based functions will not work as smoothly on them. Since it will not be read as numbers automatically, we can split it manually with this: ``` dat <- data.frame(Votes = c("4,5,3,5,1,4,4,5,6", "4,4,3,5,4,3,5,4", "5,4,6,5,3,4,1,4,6")) dat # Votes # 1 4,5,3,5,1,4,4,5,6 # 2 4,4,3,5,4,3,5,4 # 3 5,4,6,5,3,4,1,4,6 dat$Votes_mu <- sapply(strsplit(dat$Votes, ","), function(z) mean(as.numeric(z))) dat # Votes Votes_mu # 1 4,5,3,5,1,4,4,5,6 4.111111 # 2 4,4,3,5,4,3,5,4 4.000000 # 3 5,4,6,5,3,4,1,4,6 4.222222 ``` Note: if there are any non-numbers (non-response or non-numeric characters), then `mean(as.numeric(.))` will produce an `NA`. If you want to ignore the non-numbers, then change the inner code to `mean(as.numeric(z), na.rm = TRUE)`. --- ## Extra: list-columns FYI, the "list-column" thing in R can be a good way to store multiple values, especially when (as in this example) there are different amounts per row. In this case, we can do something like: ``` dat$Votes2 <- lapply(strsplit(dat$Votes, ","), as.numeric) dat # Votes Votes2 # 1 4,5,3,5,1,4,4,5,6 4, 5, 3, 5, 1, 4, 4, 5, 6 # 2 4,4,3,5,4,3,5,4 4, 4, 3, 5, 4, 3, 5, 4 # 3 5,4,6,5,3,4,1,4,6 5, 4, 6, 5, 3, 4, 1, 4, 6 ``` And while they look really similar (albeit spaces), you can see the `str`uctural differences: ``` str(dat) # 'data.frame': 3 obs. of 2 variables: # $ Votes : chr "4,5,3,5,1,4,4,5,6" "4,4,3,5,4,3,5,4" "5,4,6,5,3,4,1,4,6" # $ Votes2:List of 3 # ..$ : num 4 5 3 5 1 4 4 5 6 # ..$ : num 4 4 3 5 4 3 5 4 # ..$ : num 5 4 6 5 3 4 1 4 6 ``` This might be helpful if you need to find specific tokens (votes) in each element. 
For instance, if you wanted to know which rows had at least one `1` vote, then one could do: ``` sapply(dat$Votes2, function(z) 1 %in% z) # [1] TRUE FALSE TRUE ``` (where this method does not work with `dat$Votes`). Yes, this one example could use regex to find `1`s in `Votes`, so perhaps list-columns aren't necessary for your use-case.
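The regex alternative mentioned above can be sketched like this; it checks the character column directly, using `(^|,)` and `(,|$)` anchors so a `1` only matches as a whole token (this sketch assumes comma-separated tokens with no stray whitespace):

```r
dat <- data.frame(Votes = c("4,5,3,5,1,4,4,5,6", "4,4,3,5,4,3,5,4",
                            "5,4,6,5,3,4,1,4,6"))
# Match a 1 only when it is bounded by the start/end of the string or by
# commas, so "1" matches but a token like "15" would not.
grepl("(^|,)1(,|$)", dat$Votes)
# [1]  TRUE FALSE  TRUE
```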
null
CC BY-SA 4.0
null
2022-10-25T12:32:10.210
2022-10-25T12:37:28.050
2022-10-25T12:37:28.050
3,358,272
3,358,272
null
74,194,475
2
null
74,194,054
0
null
Here are two equivalent ways. ``` dat <- data.frame(Votes = c("4,5,3,5,1,4,4,5,6", "4,4,3,5,4,3,5,4", "5,4,6,5,3,4,1,4,6")) sapply(dat$Votes, \(x) mean(scan(textConnection(x), sep = ",")), USE.NAMES = FALSE) #> [1] 4.111111 4.000000 4.222222 dat$Votes |> sapply(\(x) x |> textConnection() |> scan(sep = ",") |> mean(), USE.NAMES = FALSE) #> [1] 4.111111 4.000000 4.222222 ``` [reprex v2.0.2](https://reprex.tidyverse.org)
null
CC BY-SA 4.0
null
2022-10-25T12:58:02.680
2022-10-25T12:58:02.680
null
null
8,245,406
null
74,194,591
2
null
74,194,054
0
null
Using `DF` in the Note at the end, scan each comma separated value by row (which for each row will create a numeric vector of those numbers) and then take the mean of that vector. ``` library(dplyr) DF %>% rowwise %>% mutate(mean = mean(scan(text = y, sep = ",", quiet = TRUE))) %>% ungroup ``` giving: ``` # A tibble: 3 × 3 x y mean <int> <chr> <dbl> 1 1 1,2,3 2 2 2 4,5 4.5 3 3 7 7 ``` Using only base R: ``` Mean <- function(x) mean(scan(text = x, sep = ",", quiet = TRUE)) transform(DF, mean = sapply(y, Mean)) ``` Another base R approach: ``` transform(DF, mean = rowMeans(read.table(text = y, fill = TRUE, sep = ","), na.rm = TRUE)) ``` ## Note ``` DF <- data.frame(x = 1:3, y = c("1,2,3", "4,5", "7")) > DF x y 1 1 1,2,3 2 2 4,5 3 3 7 ```
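To see why `na.rm = TRUE` is needed in the `rowMeans` approach, note that `read.table(fill = TRUE)` pads the ragged rows with `NA`:

```r
DF <- data.frame(x = 1:3, y = c("1,2,3", "4,5", "7"))
# Shorter rows are padded with NA so every row has three columns.
read.table(text = DF$y, fill = TRUE, sep = ",")
#   V1 V2 V3
# 1  1  2  3
# 2  4  5 NA
# 3  7 NA NA
```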
null
CC BY-SA 4.0
null
2022-10-25T13:07:02.183
2022-10-25T13:29:16.120
2022-10-25T13:29:16.120
516,548
516,548
null
74,194,629
2
null
74,089,486
0
null
You can achieve this by using XPath. Since your element does not have an id, you can use other attributes or target the element by its text value, e.g.: ``` driver.FindElement(By.XPath("//textarea[contains(text(),'text you are looking for...')]")) ``` You can also target the element including its parent element: ``` driver.FindElement(By.XPath("//div/textarea[contains(text(),'text you are looking for...')]")) ``` Also, here is a useful resource to help you understand the syntax of XPath - [XPath Syntax W3](https://www.w3schools.com/xml/xpath_syntax.asp)
null
CC BY-SA 4.0
null
2022-10-25T13:09:38.440
2022-10-25T13:09:38.440
null
null
9,099,080
null
74,194,661
2
null
74,176,029
0
null
I generally don't like messing with a grid at a local level (only as a global override). Instead, use the grid as intended and modify the children, say with negative margins to negate the grid spacing. Better, just don't use rows and columns. Use basic [flexbox](https://getbootstrap.com/docs/5.2/utilities/flex/). Here I'm extending Bootstrap's [sizing utilities](https://getbootstrap.com/docs/5.2/utilities/sizing/#relative-to-the-parent) with a custom class for 1/3 width above the `md` [breakpoint](https://getbootstrap.com/docs/5.2/layout/breakpoints/#available-breakpoints). I'm also using `m-0` on the figures to eliminate their bottom margin. ``` @media (min-width: 768px) { body .w-md-33 { width: calc(100% / 3) !important; } } ``` ``` <link rel="stylesheet" href="https://cdn.jsdelivr.net/npm/[email protected]/dist/css/bootstrap.min.css" integrity="sha384-Zenh87qX5JnK2Jl0vWa8Ck2rdkQ2Bzep5IDxbcnCeuOxjzrPF/et3URy9Bv1WTRi" crossorigin="anonymous"> <div class="d-flex flex-wrap"> <div class="w-50 w-md-33 algo bg-primary"> <figure class="m-0"> <img class="img-fluid ai" src="https://via.placeholder.com/800x300" /> </figure> </div> <div class="w-50 w-md-33 algo bg-primary"> <figure class="m-0"> <img class="img-fluid ai" src="https://via.placeholder.com/800x300" /> </figure> </div> <div class="w-50 w-md-33 algo bg-primary"> <figure class="m-0"> <img class="img-fluid ai" src="https://via.placeholder.com/800x300" /> </figure> </div> <div class="w-50 w-md-33 algo bg-primary"> <figure class="m-0"> <img class="img-fluid ai" src="https://via.placeholder.com/800x300" /> </figure> </div> <div class="w-50 w-md-33 algo bg-primary"> <figure class="m-0"> <img class="img-fluid ai" src="https://via.placeholder.com/800x300" /> </figure> </div> <div class="w-50 w-md-33 algo bg-primary"> <figure class="m-0"> <img class="img-fluid ai" src="https://via.placeholder.com/800x300" /> </figure> </div> </div> ```
null
CC BY-SA 4.0
null
2022-10-25T13:12:55.570
2022-10-25T13:24:55.163
2022-10-25T13:24:55.163
1,264,804
1,264,804
null
74,194,905
2
null
74,187,905
0
null
I simulated some data here to help make a plot more like your real case. To get the colors assigned by the `Value`, you need to assign `color` inside `aes()`. `geom_line()` takes a `group` aesthetic which defines which data points are connected. If you specify a categorical aesthetic like `color = Value > 1.35` then the `group` aesthetic will inherit from there and split into 2 lines. If you want to connect all the data points, you simply have to set `group = 1` to specify that you want them all connected together. There are a few stylistic things in your code I'd also suggest changing: - Combine your separate `data.frame`s with `merge` or `rbind` into a single `data.frame` before plotting - Set shared aesthetics once in `ggplot(aes(x = ..., y = ..., color = ...))` rather than repeating them in each `geom_*` ``` library(tidyverse) library(lubridate) set.seed(8) d <- data.frame(Timestamp = seq(ymd("2021-10-20"), ymd("2022-10-24"), by = "days"), Value = 1.35 + cumsum(runif(370, -1, 1))) d %>% ggplot(aes(x = Timestamp, y = Value, color = Value > 1.35)) + geom_point() + geom_line(aes(group = 1)) + scale_color_manual(values=c("Steel blue","plum")) ``` ![](https://i.imgur.com/HMq6FnH.png) [reprex v2.0.2](https://reprex.tidyverse.org)
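As a sketch of the combine-first suggestion (with hypothetical `d1`/`d2` standing in for the question's separate data frames, which aren't shown in the post): `rbind` them once, then pass the single result to `ggplot()` with the shared aesthetics set in one place.

```r
library(ggplot2)

# Hypothetical stand-ins for two separately-built data.frames.
d1 <- data.frame(Timestamp = as.Date("2021-10-20") + 0:4,
                 Value = c(1.2, 1.4, 1.5, 1.3, 1.6))
d2 <- data.frame(Timestamp = as.Date("2021-10-25") + 0:4,
                 Value = c(1.1, 1.5, 1.2, 1.7, 1.3))

# Combine once, then declare x/y/color a single time in ggplot().
d <- rbind(d1, d2)
p <- ggplot(d, aes(x = Timestamp, y = Value, color = Value > 1.35)) +
  geom_point() +
  geom_line(aes(group = 1))
```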
null
CC BY-SA 4.0
null
2022-10-25T13:33:19.473
2022-10-26T13:04:57.850
2022-10-26T13:04:57.850
13,210,554
13,210,554
null