pkiage committed
Commit 8a4511f · 1 Parent(s): 74f6cdb

docs: move roadmap & PESTLE from README.md

- Move roadmap to issues
- Move PESTLE to docs/PESTLE.md
- Add Heroku Autodeploy docs

Files changed (2):
  1. README.md +3 -49
  2. docs/PESTLE.md +26 -0
README.md CHANGED
@@ -104,31 +104,11 @@ Dedicated: no
 
 Sleeps: yes
 
-# Roadmap
-
-Models:
-
-- [ ] Add LightGBM
-- [ ] Add AdaBoost
-- [ ] Add Random Forest
-
-Visualization:
-
-- [ ] Add decision surface plot(s)
-
-Documentation:
-
-- [x] Add getting started and usage documentation
-- [ ] Add documentation evaluating models
-- [ ] Add design rationale(s)
-
-Other:
-
-- [x] Deploy app
-- [ ] Add csv file data input
-- [ ] Add tests
-- [ ] Add test/code coverage badge
-- [ ] Add continuous integration badge
+[Enabled Autodeploy from Github](https://devcenter.heroku.com/articles/github-integration)
+
+# Roadmap
+
+To view or submit ideas, or to contribute, please see the issues.
 
 # Docs creation
 
@@ -222,29 +202,3 @@ code2flow src/models/util_model_comparison.py -o docs/call-graph/util_model_comp
 [GraphViz Buildpack](https://github.com/weibeld/heroku-buildpack-graphviz)
 
 - Buildpack used for Heroku deployment
-
-## Political, Economic, Social, Technological, Legal and Environmental (PESTLE):
-
-[LAYING DOWN HARMONISED RULES ON ARTIFICIAL INTELLIGENCE (ARTIFICIAL INTELLIGENCE ACT) AND AMENDING CERTAIN UNION LEGISLATIVE ACTS](https://eur-lex.europa.eu/legal-content/EN/TXT/HTML/?uri=CELEX:52021PC0206&from=EN)
-
-> "(37) Another area in which the use of AI systems deserves special consideration is the access to and enjoyment of certain essential private and public services and benefits necessary for people to fully participate in society or to improve one’s standard of living. In particular, AI systems used to evaluate the credit score or creditworthiness of natural persons should be classified as high-risk AI systems, since they determine those persons’ access to financial resources or essential services such as housing, electricity, and telecommunication services. AI systems used for this purpose may lead to discrimination of persons or groups and perpetuate historical patterns of discrimination, for example based on racial or ethnic origins, disabilities, age, sexual orientation, or create new forms of discriminatory impacts. Considering the very limited scale of the impact and the available alternatives on the market, it is appropriate to exempt AI systems for the purpose of creditworthiness assessment and credit scoring when put into service by small-scale providers for their own use. Natural persons applying for or receiving public assistance benefits and services from public authorities are typically dependent on those benefits and services and in a vulnerable position in relation to the responsible authorities. If AI systems are used for determining whether such benefits and services should be denied, reduced, revoked or reclaimed by authorities, they may have a significant impact on persons’ livelihood and may infringe their fundamental rights, such as the right to social protection, non-discrimination, human dignity or an effective remedy. Those systems should therefore be classified as high-risk. Nonetheless, this Regulation should not hamper the development and use of innovative approaches in the public administration, which would stand to benefit from a wider use of compliant and safe AI systems, provided that those systems do not entail a high risk to legal and natural persons."
-
-[Europe fit for the Digital Age: Commission proposes new rules and actions for excellence and trust in Artificial Intelligence](https://ec.europa.eu/commission/presscorner/detail/en/ip_21_1682)
-
-> "High-risk AI systems will be subject to strict obligations before they can be put on the market:
->
-> - Adequate risk assessment and mitigation systems;
-> - High quality of the datasets feeding the system to minimise risks and discriminatory outcomes;
-> - Logging of activity to ensure traceability of results;
-> - Detailed documentation providing all information necessary on the system and its purpose for authorities to assess its compliance;
-> - Clear and adequate information to the user;
-> - Appropriate human oversight measures to minimise risk;
-> - High level of robustness, security and accuracy."
-
-[A list of open problems in DeFi](https://mirror.xyz/0xemperor.eth/0guEj0CYt5V8J5AKur2_UNKyOhONr1QJaG4NGDF0YoQ?utm_source=tldrnewsletter)
-
-- Automated risk scoring of lending/borrowing pools -> an increasingly important problem.
-- An alternative framing: look for a function that calculates the probability of default given the pool of assets you hold.
-- Managing risk for lenders and distributing risk / undercollateralized loans.
-- TradFi is plagued by NPAs (non-performing assets) but still ultimately falls back on some form of credit-score establishment ([Spectral Finance](https://www.spectral.finance/) is working on this, but it remains an open problem).
-- Still, most credit-scoring methods rely on on-chain history to establish credit. As DeFi moves toward privacy, is this approach extendable to that setting? (Homomorphic encryption could provide a solution.)
docs/PESTLE.md ADDED
@@ -0,0 +1,26 @@
+
+## Political, Economic, Social, Technological, Legal and Environmental (PESTLE):
+
+[LAYING DOWN HARMONISED RULES ON ARTIFICIAL INTELLIGENCE (ARTIFICIAL INTELLIGENCE ACT) AND AMENDING CERTAIN UNION LEGISLATIVE ACTS](https://eur-lex.europa.eu/legal-content/EN/TXT/HTML/?uri=CELEX:52021PC0206&from=EN)
+
+> "(37) Another area in which the use of AI systems deserves special consideration is the access to and enjoyment of certain essential private and public services and benefits necessary for people to fully participate in society or to improve one’s standard of living. In particular, AI systems used to evaluate the credit score or creditworthiness of natural persons should be classified as high-risk AI systems, since they determine those persons’ access to financial resources or essential services such as housing, electricity, and telecommunication services. AI systems used for this purpose may lead to discrimination of persons or groups and perpetuate historical patterns of discrimination, for example based on racial or ethnic origins, disabilities, age, sexual orientation, or create new forms of discriminatory impacts. Considering the very limited scale of the impact and the available alternatives on the market, it is appropriate to exempt AI systems for the purpose of creditworthiness assessment and credit scoring when put into service by small-scale providers for their own use. Natural persons applying for or receiving public assistance benefits and services from public authorities are typically dependent on those benefits and services and in a vulnerable position in relation to the responsible authorities. If AI systems are used for determining whether such benefits and services should be denied, reduced, revoked or reclaimed by authorities, they may have a significant impact on persons’ livelihood and may infringe their fundamental rights, such as the right to social protection, non-discrimination, human dignity or an effective remedy. Those systems should therefore be classified as high-risk. Nonetheless, this Regulation should not hamper the development and use of innovative approaches in the public administration, which would stand to benefit from a wider use of compliant and safe AI systems, provided that those systems do not entail a high risk to legal and natural persons."
+
+[Europe fit for the Digital Age: Commission proposes new rules and actions for excellence and trust in Artificial Intelligence](https://ec.europa.eu/commission/presscorner/detail/en/ip_21_1682)
+
+> "High-risk AI systems will be subject to strict obligations before they can be put on the market:
+>
+> - Adequate risk assessment and mitigation systems;
+> - High quality of the datasets feeding the system to minimise risks and discriminatory outcomes;
+> - Logging of activity to ensure traceability of results;
+> - Detailed documentation providing all information necessary on the system and its purpose for authorities to assess its compliance;
+> - Clear and adequate information to the user;
+> - Appropriate human oversight measures to minimise risk;
+> - High level of robustness, security and accuracy."
+
+[A list of open problems in DeFi](https://mirror.xyz/0xemperor.eth/0guEj0CYt5V8J5AKur2_UNKyOhONr1QJaG4NGDF0YoQ?utm_source=tldrnewsletter)
+
+- Automated risk scoring of lending/borrowing pools -> an increasingly important problem.
+- An alternative framing: look for a function that calculates the probability of default given the pool of assets you hold.
+- Managing risk for lenders and distributing risk / undercollateralized loans.
+- TradFi is plagued by NPAs (non-performing assets) but still ultimately falls back on some form of credit-score establishment ([Spectral Finance](https://www.spectral.finance/) is working on this, but it remains an open problem).
+- Still, most credit-scoring methods rely on on-chain history to establish credit. As DeFi moves toward privacy, is this approach extendable to that setting? (Homomorphic encryption could provide a solution.)
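The "probability of default given the pool of assets" idea in the DeFi notes above can be sketched in a few lines. This is an illustrative toy, not code from this repository: all names (`Asset`, `pool_expected_loss`, `pool_any_default_prob`) are hypothetical, and it assumes each asset has a known per-asset default probability and that defaults are independent — both strong simplifications.

```python
# Toy credit-risk sketch (hypothetical names, assumed independent defaults).
from dataclasses import dataclass


@dataclass
class Asset:
    exposure: float  # amount at risk
    pd: float        # probability of default, in [0, 1]
    lgd: float       # loss given default, in [0, 1]


def pool_expected_loss(assets: list[Asset]) -> float:
    """Expected loss of the pool: sum of exposure * PD * LGD per asset."""
    return sum(a.exposure * a.pd * a.lgd for a in assets)


def pool_any_default_prob(assets: list[Asset]) -> float:
    """P(at least one default) = 1 - product of per-asset survival probabilities."""
    p_none = 1.0
    for a in assets:
        p_none *= 1.0 - a.pd
    return 1.0 - p_none


pool = [Asset(100.0, 0.02, 0.6), Asset(50.0, 0.05, 0.4)]
print(round(pool_expected_loss(pool), 2))     # 100*0.02*0.6 + 50*0.05*0.4 = 2.2
print(round(pool_any_default_prob(pool), 3))  # 1 - 0.98*0.95 = 0.069
```

A real scorer would have to estimate the per-asset PDs (the hard part the open-problems list is pointing at) and model default correlation across assets rather than assume independence.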