added final changes to repo

mt 2025-02-18 16:02:34 +03:00
commit 9be65e379b
33 changed files with 2208 additions and 0 deletions

92
README.md Normal file
@@ -0,0 +1,92 @@
# logstash
## Getting started
To make it easy for you to get started with GitLab, here's a list of recommended next steps.
Already a pro? Just edit this README.md and make it your own. Want to make it easy? [Use the template at the bottom](#editing-this-readme)!
## Add your files
- [ ] [Create](https://docs.gitlab.com/ee/user/project/repository/web_editor.html#create-a-file) or [upload](https://docs.gitlab.com/ee/user/project/repository/web_editor.html#upload-a-file) files
- [ ] [Add files using the command line](https://docs.gitlab.com/ee/gitlab-basics/add-file.html#add-a-file-using-the-command-line) or push an existing Git repository with the following command:
```
cd existing_repo
git remote add origin https://gitlablex.ibasis.net/simfony/adrian_transition/qa-ops/logstash.git
git branch -M production
git push -uf origin production
```
## Integrate with your tools
- [ ] [Set up project integrations](https://gitlablex.ibasis.net/simfony/adrian_transition/qa-ops/logstash/-/settings/integrations)
## Collaborate with your team
- [ ] [Invite team members and collaborators](https://docs.gitlab.com/ee/user/project/members/)
- [ ] [Create a new merge request](https://docs.gitlab.com/ee/user/project/merge_requests/creating_merge_requests.html)
- [ ] [Automatically close issues from merge requests](https://docs.gitlab.com/ee/user/project/issues/managing_issues.html#closing-issues-automatically)
- [ ] [Enable merge request approvals](https://docs.gitlab.com/ee/user/project/merge_requests/approvals/)
- [ ] [Automatically merge when pipeline succeeds](https://docs.gitlab.com/ee/user/project/merge_requests/merge_when_pipeline_succeeds.html)
## Test and Deploy
Use the built-in continuous integration in GitLab.
- [ ] [Get started with GitLab CI/CD](https://docs.gitlab.com/ee/ci/quick_start/index.html)
- [ ] [Analyze your code for known vulnerabilities with Static Application Security Testing (SAST)](https://docs.gitlab.com/ee/user/application_security/sast/)
- [ ] [Deploy to Kubernetes, Amazon EC2, or Amazon ECS using Auto Deploy](https://docs.gitlab.com/ee/topics/autodevops/requirements.html)
- [ ] [Use pull-based deployments for improved Kubernetes management](https://docs.gitlab.com/ee/user/clusters/agent/)
- [ ] [Set up protected environments](https://docs.gitlab.com/ee/ci/environments/protected_environments.html)
***
# Editing this README
When you're ready to make this README your own, just edit this file and use the handy template below (or feel free to structure it however you want - this is just a starting point!). Thank you to [makeareadme.com](https://www.makeareadme.com/) for this template.
## Suggestions for a good README
Every project is different, so consider which of these sections apply to yours. The sections used in the template are suggestions for most open source projects. Also keep in mind that while a README can be too long and detailed, too long is better than too short. If you think your README is too long, consider utilizing another form of documentation rather than cutting out information.
## Name
Choose a self-explaining name for your project.
## Description
Let people know what your project can do specifically. Provide context and add a link to any reference visitors might be unfamiliar with. A list of Features or a Background subsection can also be added here. If there are alternatives to your project, this is a good place to list differentiating factors.
## Badges
On some READMEs, you may see small images that convey metadata, such as whether or not all the tests are passing for the project. You can use Shields to add some to your README. Many services also have instructions for adding a badge.
## Visuals
Depending on what you are making, it can be a good idea to include screenshots or even a video (you'll frequently see GIFs rather than actual videos). Tools like ttygif can help, but check out Asciinema for a more sophisticated method.
## Installation
Within a particular ecosystem, there may be a common way of installing things, such as using Yarn, NuGet, or Homebrew. However, consider the possibility that whoever is reading your README is a novice and would like more guidance. Listing specific steps helps remove ambiguity and gets people using your project as quickly as possible. If it only runs in a specific context like a particular programming language version or operating system or has dependencies that have to be installed manually, also add a Requirements subsection.
## Usage
Use examples liberally, and show the expected output if you can. It's helpful to have inline the smallest example of usage that you can demonstrate, while providing links to more sophisticated examples if they are too long to reasonably include in the README.
## Support
Tell people where they can go for help. It can be any combination of an issue tracker, a chat room, an email address, etc.
## Roadmap
If you have ideas for releases in the future, it is a good idea to list them in the README.
## Contributing
State if you are open to contributions and what your requirements are for accepting them.
For people who want to make changes to your project, it's helpful to have some documentation on how to get started. Perhaps there is a script that they should run or some environment variables that they need to set. Make these steps explicit. These instructions could also be useful to your future self.
You can also document commands to lint the code or run tests. These steps help to ensure high code quality and reduce the likelihood that the changes inadvertently break something. Having instructions for running tests is especially helpful if it requires external setup, such as starting a Selenium server for testing in a browser.
## Authors and acknowledgment
Show your appreciation to those who have contributed to the project.
## License
For open source projects, say how it is licensed.
## Project status
If you have run out of energy or time for your project, put a note at the top of the README saying that development has slowed down or stopped completely. Someone may choose to fork your project or volunteer to step in as a maintainer or owner, allowing your project to keep going. You can also make an explicit request for maintainers.

@@ -0,0 +1,89 @@
# intake.conf
input { pipeline { address => "Simfony_Mobility_Logs" } }
filter {
if "ocs" in [tags] {
clone {
clones => ["notification-ocs"]
add_tag => [ "notification-ocs" ]
}
grok {
match => { "message" => "%{TIMESTAMP_ISO8601:timestamp}%{SPACE}%{LOGLEVEL:log-level}%{SPACE}%{DATA:issuer}%{SPACE}\(%{DATA:pool}\)%{SPACE}%{GREEDYDATA:log-message}" }
}
if "notification-ocs" in [tags] {
# ruby {
# code => 'puts "Input rule matched: contains ocs-notification-v1"'
# }
if [message] =~ /\[OCS-NOTIFICATION\]/ {
# Keep only lines containing "notification-v1"
if [message] =~ /v2_simfony|v2_advancedtracking|v2_zariot|v2_v2_alwaysconnected|v2_ulink|v2_ipvisie|v2_ip3labs|v2_aec_skyline|v2_mondicon|v2_peoplefone_deu/ {
# simfony
mutate {
add_tag => ["notification_simfony"]
}
} else if [message] =~ /v2_ibasis_ibasis|v2_ibasis_sales_demo|v2_combonet|v2_global_operator|v2_imatrixsys|v2_v2_business_iot|v2_infisim|v2_thinglabs|v2_athalos|v2_pkcloud|v2_fidenty/ {
# ibasis
mutate {
add_tag => ["notification_ibasis"]
}
}
} else {
drop {} # Drop all other lines
}
}
} else if "diameter" in [tags] {
grok {
match => { "message" => "%{TIMESTAMP_ISO8601:timestamp}%{SPACE}%{LOGLEVEL:log-level}%{SPACE}%{DATA:issuer}%{SPACE}\(%{DATA:pool}\)%{SPACE}%{GREEDYDATA:log-message}" }
}
} else if "hlr" in [tags] {
grok {
match => { "message" => "%{TIMESTAMP_ISO8601:timestamp}%{SPACE}%{LOGLEVEL:log-level}%{SPACE}%{DATA:issuer}%{SPACE}\(%{DATA:pool}\)%{SPACE}%{GREEDYDATA:log-message}" }
}
} else if "dra" in [tags] {
grok {
match => { "message" => "%{TIMESTAMP_ISO8601:timestamp}%{SPACE}%{LOGLEVEL:log-level}%{SPACE}%{DATA:issuer}%{SPACE}\(%{DATA:pool}\)%{SPACE}%{GREEDYDATA:log-message}" }
}
} else if "hss" in [tags] {
grok {
match => { "message" => "%{TIMESTAMP_ISO8601:timestamp}%{SPACE}%{LOGLEVEL:log-level}%{SPACE}%{DATA:issuer}%{SPACE}\(%{DATA:pool}\)%{SPACE}%{GREEDYDATA:log-message}" }
}
} else if "aaa" in [tags] {
clone {
clones => ["notification-aaa"]
add_tag => [ "notification-aaa" ]
}
grok {
patterns_dir => ["/etc/logstash/patterns"]
match => {
"message" => [
"%{FREERADIUS_DATE:timestamp}%{SPACE}:%{SPACE}%{FREERADIUS_LOGTYPE:log-level}:%{SPACE}%{FREERADIUS_LOGTYPE:log-plugin}:%{SPACE}%{GREEDYDATA:log-message}",
"%{FREERADIUS_DATE:timestamp}%{SPACE}:%{SPACE}%{FREERADIUS_LOGTYPE:log-level}:%{SPACE}%{GREEDYDATA:log-message}"
]
}
}
if "notification-aaa" in [tags] {
if [message] =~ /notification-v1/ {
# Keep only lines containing "notification-v1"
if [message] =~ /v2_simfony|v2_advancedtracking|v2_zariot|v2_v2_alwaysconnected|v2_ulink|v2_ipvisie|v2_ip3labs|v2_aec_skyline|v2_mondicon|v2_peoplefone_deu/ {
# simfony
mutate {
add_tag => ["notification_simfony"]
}
} else if [message] =~ /v2_ibasis_ibasis|v2_ibasis_sales_demo|v2_combonet|v2_global_operator|v2_imatrixsys|v2_v2_business_iot|v2_infisim|v2_thinglabs|v2_athalos|v2_pkcloud|v2_fidenty/ {
# ibasis
mutate {
add_tag => ["notification_ibasis"]
}
}
} else {
drop {} # Drop all other lines
}
}
} else if "meveo" in [tags] {
grok {
match => { "message" => "%{TIME:timestamp}%{SPACE}%{LOGLEVEL:log-level}%{SPACE}\[%{DATA:issuer}\]%{SPACE}\(%{DATA:pool}\)%{SPACE}%{GREEDYDATA:log-message}" }
}
}
}

@@ -0,0 +1,106 @@
# output_simfony_network_log
#input { pipeline { address => "output-simfony-network-log" }}
#filter {
# mutate {
# remove_field => [ "@timestamp" ]
# }
#}
output {
if "notification_simfony" in [tags] {
kafka {
bootstrap_servers => "10.5.48.47:9092"
topic_id => "notification_simfony"
codec => json
}
} else if "notification_ibasis" in [tags] {
kafka {
bootstrap_servers => "10.5.48.47:9092"
topic_id => "notification_ibasis"
codec => json
}
}
# else if "ocs" in [tags] {
# elasticsearch {
# hosts => ["http://10.12.174.15:9200","http://10.12.174.16:9200","http://10.12.174.17:9200"]
# user => "logstash_internal"
# password => "vK54tBEFUZaKBVtwsmlHksbr07Rm8cTn"
# ilm_enabled => true
# ilm_rollover_alias => "simfony-mobility-ocs-log"
# ilm_policy => "simfony-log-hot-warm"
# ilm_pattern => "{now/d}-1"
# }
# }
#else if "diameter" in [tags] {
# elasticsearch {
# hosts => ["http://10.12.174.15:9200","http://10.12.174.16:9200","http://10.12.174.17:9200"]
# user => "logstash_internal"
# password => "vK54tBEFUZaKBVtwsmlHksbr07Rm8cTn"
# ilm_enabled => true
# ilm_rollover_alias => "simfony-mobility-ocs-log"
# ilm_policy => "simfony-log-hot-warm"
# ilm_pattern => "{now/d}-1"
# }
# } else if "hlr" in [tags] {
# elasticsearch {
# hosts => ["http://10.12.174.15:9200","http://10.12.174.16:9200","http://10.12.174.17:9200"]
# user => "logstash_internal"
# password => "vK54tBEFUZaKBVtwsmlHksbr07Rm8cTn"
# ilm_enabled => true
# ilm_rollover_alias => "simfony-mobility-hlr-log"
# ilm_policy => "simfony-log-hot-warm"
# ilm_pattern => "{now/d}-1"
# }
# }
#else if "notification" in [tags] {
# kafka {
# bootstrap_servers => "10.12.174.50:9092"
# topic_id => "testnotification"
# codec => json
# }
# }
# else if "aaa" in [tags] {
# elasticsearch {
# hosts => ["http://10.12.174.15:9200","http://10.12.174.16:9200","http://10.12.174.17:9200"]
# user => "logstash_internal"
# password => "vK54tBEFUZaKBVtwsmlHksbr07Rm8cTn"
# ilm_enabled => true
# ilm_rollover_alias => "simfony-mobility-aaa-log"
# ilm_policy => "simfony-log-hot-warm"
# ilm_pattern => "{now/d}-1"
# }
# } else if "dra" in [tags] {
# elasticsearch {
# hosts => ["http://10.12.174.15:9200","http://10.12.174.16:9200","http://10.12.174.17:9200"]
# user => "logstash_internal"
# password => "vK54tBEFUZaKBVtwsmlHksbr07Rm8cTn"
# ilm_enabled => true
# ilm_rollover_alias => "simfony-mobility-dra-log"
# ilm_policy => "simfony-log-hot-warm"
# ilm_pattern => "000001"
# }
# } else if "hss" in [tags] {
# elasticsearch {
# hosts => ["http://10.12.174.15:9200","http://10.12.174.16:9200","http://10.12.174.17:9200"]
# user => "logstash_internal"
# password => "vK54tBEFUZaKBVtwsmlHksbr07Rm8cTn"
# ilm_enabled => true
# ilm_rollover_alias => "simfony-mobility-hss-log"
# ilm_policy => "simfony-log-hot-warm"
# ilm_pattern => "000001"
# }
# } else if "meveo" in [tags] {
# elasticsearch {
# hosts => ["http://10.12.174.15:9200","http://10.12.174.16:9200","http://10.12.174.17:9200"]
# user => "logstash_internal"
# password => "vK54tBEFUZaKBVtwsmlHksbr07Rm8cTn"
# ilm_enabled => true
# ilm_rollover_alias => "simfony-mobility-meveo-log"
# ilm_policy => "simfony-log-hot-warm"
# ilm_pattern => "000001"
# }
# }
}

31
bck_stuff/syslog.conf Normal file
@@ -0,0 +1,31 @@
input {
syslog {
port => 6005
grok_pattern => "<%{POSINT:syslog_pri}>%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}"
id => "syslog"
}
}
filter {
syslog_pri { }
date {
match => [ "syslog_timestamp", "MMM dd HH:mm:ss", "MMM dd HH:mm:ss" ]
target => "syslog_timestamp"
}
mutate {
remove_field => [ "severity", "severity_label", "priority", "facility", "message", "@timestamp" ]
}
}
output {
elasticsearch {
hosts => ["http://es01:9200","http://es02:9200","http://es03:9200"]
user => "elastic"
password => "xW8DTQG69Zrxy7hx"
ilm_enabled => true
ilm_rollover_alias => "simfony-syslog"
ilm_policy => "simfony-syslog"
ilm_pattern => "000001"
}
}

@@ -0,0 +1,89 @@
# intake.conf
input { pipeline { address => "Simfony_Mobility_Logs" } }
filter {
if "ocs" in [tags] {
clone {
clones => ["notification-ocs"]
add_tag => [ "notification-ocs" ]
}
grok {
match => { "message" => "%{TIMESTAMP_ISO8601:timestamp}%{SPACE}%{LOGLEVEL:log-level}%{SPACE}%{DATA:issuer}%{SPACE}\(%{DATA:pool}\)%{SPACE}%{GREEDYDATA:log-message}" }
}
if "notification-ocs" in [tags] {
# ruby {
# code => 'puts "Input rule matched: contains ocs-notification-v1"'
# }
if [message] =~ /\[OCS-NOTIFICATION\]/ {
# Keep only lines containing "notification-v1"
if [message] =~ /v2_simfony|v2_advancedtracking|v2_zariot|v2_v2_alwaysconnected|v2_ulink|v2_ipvisie|v2_ip3labs|v2_aec_skyline|v2_mondicon|v2_peoplefone_deu/ {
# simfony
mutate {
add_tag => ["notification_simfony"]
}
} else if [message] =~ /v2_ibasis_ibasis|v2_ibasis_sales_demo|v2_combonet|v2_global_operator|v2_imatrixsys|v2_v2_business_iot|v2_infisim|v2_thinglabs|v2_athalos|v2_pkcloud|v2_fidenty/ {
# ibasis
mutate {
add_tag => ["notification_ibasis"]
}
}
} else {
drop {} # Drop all other lines
}
}
} else if "diameter" in [tags] {
grok {
match => { "message" => "%{TIMESTAMP_ISO8601:timestamp}%{SPACE}%{LOGLEVEL:log-level}%{SPACE}%{DATA:issuer}%{SPACE}\(%{DATA:pool}\)%{SPACE}%{GREEDYDATA:log-message}" }
}
} else if "hlr" in [tags] {
grok {
match => { "message" => "%{TIMESTAMP_ISO8601:timestamp}%{SPACE}%{LOGLEVEL:log-level}%{SPACE}%{DATA:issuer}%{SPACE}\(%{DATA:pool}\)%{SPACE}%{GREEDYDATA:log-message}" }
}
} else if "dra" in [tags] {
grok {
match => { "message" => "%{TIMESTAMP_ISO8601:timestamp}%{SPACE}%{LOGLEVEL:log-level}%{SPACE}%{DATA:issuer}%{SPACE}\(%{DATA:pool}\)%{SPACE}%{GREEDYDATA:log-message}" }
}
} else if "hss" in [tags] {
grok {
match => { "message" => "%{TIMESTAMP_ISO8601:timestamp}%{SPACE}%{LOGLEVEL:log-level}%{SPACE}%{DATA:issuer}%{SPACE}\(%{DATA:pool}\)%{SPACE}%{GREEDYDATA:log-message}" }
}
} else if "aaa" in [tags] {
clone {
clones => ["notification-aaa"]
add_tag => [ "notification-aaa" ]
}
grok {
patterns_dir => ["/etc/logstash/patterns"]
match => {
"message" => [
"%{FREERADIUS_DATE:timestamp}%{SPACE}:%{SPACE}%{FREERADIUS_LOGTYPE:log-level}:%{SPACE}%{FREERADIUS_LOGTYPE:log-plugin}:%{SPACE}%{GREEDYDATA:log-message}",
"%{FREERADIUS_DATE:timestamp}%{SPACE}:%{SPACE}%{FREERADIUS_LOGTYPE:log-level}:%{SPACE}%{GREEDYDATA:log-message}"
]
}
}
if "notification-aaa" in [tags] {
if [message] =~ /notification-v1/ {
# Keep only lines containing "notification-v1"
if [message] =~ /v2_simfony|v2_advancedtracking|v2_zariot|v2_v2_alwaysconnected|v2_ulink|v2_ipvisie|v2_ip3labs|v2_aec_skyline|v2_mondicon|v2_peoplefone_deu/ {
# simfony
mutate {
add_tag => ["notification_simfony"]
}
} else if [message] =~ /v2_ibasis_ibasis|v2_ibasis_sales_demo|v2_combonet|v2_global_operator|v2_imatrixsys|v2_v2_business_iot|v2_infisim|v2_thinglabs|v2_athalos|v2_pkcloud|v2_fidenty/ {
# ibasis
mutate {
add_tag => ["notification_ibasis"]
}
}
} else {
drop {} # Drop all other lines
}
}
} else if "meveo" in [tags] {
grok {
match => { "message" => "%{TIME:timestamp}%{SPACE}%{LOGLEVEL:log-level}%{SPACE}\[%{DATA:issuer}\]%{SPACE}\(%{DATA:pool}\)%{SPACE}%{GREEDYDATA:log-message}" }
}
}
}

@@ -0,0 +1,9 @@
# Meveo log lines carry only the TIME portion. This filter concatenates today's date (taken from the event's @timestamp) with the Meveo log time.
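# Illustrative example (hypothetical values): if the event's @timestamp falls on 2025-02-18 and
# the grok'ed Meveo "timestamp" is "16:02:34,123", the mutate below rewrites it to
# "2025-02-18T16:02:34,123Z".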
filter {
if "meveo" in [tags] {
mutate {
replace => { "timestamp" => "%{+YYYY-MM-dd}T%{timestamp}Z" }
}
}
}

@@ -0,0 +1,49 @@
# sanitize-simfony-network-log.conf
#input { pipeline { address => "sanitize-simfony-netork-log" } }
filter {
grok {
patterns_dir => ["/etc/logstash/patterns"]
match => {
"log-message" => "IMSI\s*=\s*(%{IMSI:imsi})"
}
tag_on_failure => []
}
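# Hypothetical example: a log-message containing "IMSI = 001011234567890" yields the field
# imsi => "001011234567890" (the IMSI pattern in patterns/custom_pattern matches exactly 15 digits).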
date {
match => [ "timestamp", "yyyy-MM-dd HH:mm:ss,SSS", "ISO8601", "EEE MMM dd HH:mm:ss YYY", "EEE MMM d HH:mm:ss YYYY"]
target => "timestamp"
}
mutate {
split => ["[host][name]", "."]
replace => ["[host][name]", "%{[host][name][0]}"]
rename => { "[host][name]" => "hostname" }
}
mutate {
remove_field => [ "@timestamp" ]
remove_field => [ "message" ]
remove_field => [ "[agent]" ]
remove_field => [ "[ecs][version]" ]
remove_field => [ "[host][architecture]" ]
remove_field => [ "[host][containerized]" ]
remove_field => [ "[host][hostname]" ]
remove_field => [ "[host][name]" ]
remove_field => [ "[host][id]" ]
remove_field => [ "[host][mac]" ]
remove_field => [ "[host][os][name]" ]
remove_field => [ "[host][os][codename]" ]
remove_field => [ "[host][os][family]" ]
remove_field => [ "[host][os][kernel]" ]
remove_field => [ "[host][os][platform]" ]
remove_field => [ "[host][os][version]" ]
remove_field => [ "[host][os][family]" ]
remove_field => [ "[host][os][kernel]" ]
remove_field => [ "[log][offset]"]
}
}
#output { pipeline { send_to => "output-simfony-network-log" } }

@@ -0,0 +1,106 @@
# output_simfony_network_log
#input { pipeline { address => "output-simfony-network-log" }}
#filter {
# mutate {
# remove_field => [ "@timestamp" ]
# }
#}
output {
if "notification_simfony" in [tags] {
kafka {
bootstrap_servers => "172.20.110.222:9092"
topic_id => "notification_simfony"
codec => json
}
} else if "notification_ibasis" in [tags] {
kafka {
bootstrap_servers => "172.20.110.222:9092"
topic_id => "notification_ibasis"
codec => json
}
}
# else if "ocs" in [tags] {
# elasticsearch {
# hosts => ["http://10.12.174.15:9200","http://10.12.174.16:9200","http://10.12.174.17:9200"]
# user => "logstash_internal"
# password => "vK54tBEFUZaKBVtwsmlHksbr07Rm8cTn"
# ilm_enabled => true
# ilm_rollover_alias => "simfony-mobility-ocs-log"
# ilm_policy => "simfony-log-hot-warm"
# ilm_pattern => "{now/d}-1"
# }
# }
#else if "diameter" in [tags] {
# elasticsearch {
# hosts => ["http://10.12.174.15:9200","http://10.12.174.16:9200","http://10.12.174.17:9200"]
# user => "logstash_internal"
# password => "vK54tBEFUZaKBVtwsmlHksbr07Rm8cTn"
# ilm_enabled => true
# ilm_rollover_alias => "simfony-mobility-ocs-log"
# ilm_policy => "simfony-log-hot-warm"
# ilm_pattern => "{now/d}-1"
# }
# } else if "hlr" in [tags] {
# elasticsearch {
# hosts => ["http://10.12.174.15:9200","http://10.12.174.16:9200","http://10.12.174.17:9200"]
# user => "logstash_internal"
# password => "vK54tBEFUZaKBVtwsmlHksbr07Rm8cTn"
# ilm_enabled => true
# ilm_rollover_alias => "simfony-mobility-hlr-log"
# ilm_policy => "simfony-log-hot-warm"
# ilm_pattern => "{now/d}-1"
# }
# }
#else if "notification" in [tags] {
# kafka {
# bootstrap_servers => "10.12.174.50:9092"
# topic_id => "testnotification"
# codec => json
# }
# }
# else if "aaa" in [tags] {
# elasticsearch {
# hosts => ["http://10.12.174.15:9200","http://10.12.174.16:9200","http://10.12.174.17:9200"]
# user => "logstash_internal"
# password => "vK54tBEFUZaKBVtwsmlHksbr07Rm8cTn"
# ilm_enabled => true
# ilm_rollover_alias => "simfony-mobility-aaa-log"
# ilm_policy => "simfony-log-hot-warm"
# ilm_pattern => "{now/d}-1"
# }
# } else if "dra" in [tags] {
# elasticsearch {
# hosts => ["http://10.12.174.15:9200","http://10.12.174.16:9200","http://10.12.174.17:9200"]
# user => "logstash_internal"
# password => "vK54tBEFUZaKBVtwsmlHksbr07Rm8cTn"
# ilm_enabled => true
# ilm_rollover_alias => "simfony-mobility-dra-log"
# ilm_policy => "simfony-log-hot-warm"
# ilm_pattern => "000001"
# }
# } else if "hss" in [tags] {
# elasticsearch {
# hosts => ["http://10.12.174.15:9200","http://10.12.174.16:9200","http://10.12.174.17:9200"]
# user => "logstash_internal"
# password => "vK54tBEFUZaKBVtwsmlHksbr07Rm8cTn"
# ilm_enabled => true
# ilm_rollover_alias => "simfony-mobility-hss-log"
# ilm_policy => "simfony-log-hot-warm"
# ilm_pattern => "000001"
# }
# } else if "meveo" in [tags] {
# elasticsearch {
# hosts => ["http://10.12.174.15:9200","http://10.12.174.16:9200","http://10.12.174.17:9200"]
# user => "logstash_internal"
# password => "vK54tBEFUZaKBVtwsmlHksbr07Rm8cTn"
# ilm_enabled => true
# ilm_rollover_alias => "simfony-mobility-meveo-log"
# ilm_policy => "simfony-log-hot-warm"
# ilm_pattern => "000001"
# }
# }
}

137
conf.d/auditlogs.conf Normal file
@@ -0,0 +1,137 @@
input {
tcp {
port => 5555
codec => json
}
}
filter {
grok {
match => { "message" => "\[%{WORD:tenant}\]" }
}
grok {
patterns_dir => ["/etc/logstash/patterns"]
match => {"message" => "userId\s*=\s*(%{USER:user})"}
tag_on_failure => []
}
translate {
field => "X-Operation-Name"
destination => "category"
dictionary => {
"ProductResource.getAllProductsWithDescriptionAndTechnicalId" => "product"
"ProductResource.importAll" => "product"
"ProductResource.getAutoCompletOpt" => "product"
"ProductResource.createProductsPair" => "product"
"ProductResource.getProduct" => "product"
"ProductResource.assignProduct" => "product"
"ProductResource.deleteProduct" => "product"
"ProductResource.getAllProducts" => "product"
"ProductResource.getEsProducts" => "product"
"ProductResource.countProducts" => "product"
"ProductResource.createProduct" => "product"
"ProductResource.unassignProduct" => "product"
"ProductResource.updateProductsPair" => "product"
"ProductResource.updateProduct" => "product"
"SimCardResource.searchSimCardsInEs" => "sim"
"SimCardResource.getDataLimit" => "sim"
"SimCardResource.getDataLimitBalance" => "sim"
"SimCardResource.countSimCards" => "sim"
"SimCardResource.getAllSimCards" => "sim"
"SimCardResource.getSimCard" => "sim"
"SimCardResource.getUsage" => "sim"
"SimCardResource.getSimCardStatistics" => "sim"
"SimCardResource.simCardPing" => "sim"
"SimCardResource.getSimNetworkStatusGroupedByCountry" => "sim"
"SimCardResource.importSimCardByCSV" => "sim"
"SimCardResource.getLiveUsage" => "sim"
"SimCardResource.provisionSimCard" => "sim"
"SimCardResource.updateSimCards" => "sim"
"SimCardResource.getAllSimCardsByProdInstaParam" => "sim"
"SimCardResource.syncDbWithEs" => "sim"
"SimCardResource.setDataLimit" => "sim"
"SimCardResource.updateSimAfterNetworkEvent" => "sim"
"SimCardResource.getTotalUsage" => "sim"
"SimCardResource.validateSimCSV" => "sim"
"SimCardResource.simCardLocationReset" => "sim"
"SimCardResource.exportCSV" => "sim"
"SimCardResource.getSimCardForNotifications" => "sim"
"SimCardResource.provisionSimCardByCSV" => "sim"
"SimCardResource.updateSimCard" => "sim"
"SimCardResource.getAutoCompletOpt" => "sim"
"SimCardResource.createSimCard" => "sim"
"SimCardResource.getSimsForProvision" => "sim"
"TechnicalProductResource.updateTechnicalProduct" => "technical_product"
"TechnicalProductResource.createTechnicalProduct" => "technical_product"
"TechnicalProductResource.deleteTechnicalProduct" => "technical_product"
"TechnicalProductResource.getAllTechnicalProducts" => "technical_product"
"TechnicalProductResource.getTechnicalProduct" => "technical_product"
"OrderResource.getOrders" => "order"
"OrderResource.update" => "order"
"OrderResource.simCardAction" => "order"
"OrderResource.moveToSimCardBillingAccount" => "order"
"OrderResource.getFailedItemsForOrders" => "order"
"OrderResource.fillOrderWithSimCards" => "order"
"OrderResource.createSimOrder" => "order"
"OrderResource.getBatchFileForOrders" => "order"
"OrderResource.changeStatus" => "order"
"OrderResource.getOrder" => "order"
"OrderResource.batchOperation" => "order"
"OrderResource.getOrderStatus" => "order"
"OrderResource.getBatchOrders" => "order"
"OrderResource.changeSimCardPlan" => "order"
"CustomerProfileResource.updateCustomFields" => "customer"
"CustomerProfileResource.createBillingAccount" => "customer"
"CustomerProfileResource.updateUser" => "customer"
"CustomerProfileResource.getCustomers" => "customer"
"CustomerProfileResource.deleteFile" => "customer"
"CustomerProfileResource.getUsers" => "customer"
"CustomerProfileResource.saveContract" => "customer"
"CustomerProfileResource.endTrialPeriod" => "customer"
"CustomerProfileResource.uploadFileForAccount" => "customer"
"CustomerProfileResource.sendOnboardingDetails" => "customer"
"CustomerProfileResource.extendTrialPeriod" => "customer"
"CustomerProfileResource.assignUploadedFiles" => "customer"
"CustomerProfileResource.uploadFile" => "customer"
"CustomerProfileResource.getCurrentContactAddress" => "customer"
"CustomerProfileResource.getBatchFileForOrders" => "customer"
"CustomerProfileResource.updateBillingAccount" => "customer"
"CustomerProfileResource.deleteBillingAccount" => "customer"
"CustomerProfileResource.getCustomFields" => "customer"
"CustomerProfileResource.createUser" => "customer"
"CustomerProfileResource.getBillingAccounts" => "customer"
"CustomerProfileResource.updateAssignedPlansProducts" => "customer"
"CustomerProfileResource.deleteUser" => "customer"
"CustomerProfileResource.getDocuments" => "customer"
"CustomerProfileResource.acceptContract" => "customer"
"CustomerProfileResource.updateContactAddress" => "customer"
"CustomerProfileResource.getContract" => "customer"
"CustomerProfileResource.getAssignedPlansProducts" => "customer"
"CustomerProfileResource.processAccount" => "customer"
"CustomerProfileResource.getContactAddress" => "customer"
"CustomerProfileResource.rejectContract" => "customer"
}
fallback => "unknown"
}
dissect {
mapping => {
"message" => "%{?drop} payload=%{payload_json}, %{?drop}"
}
}
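# Hypothetical example: for a message such as
#   "... userId = jdoe, payload={"simId":42}, status=OK ..."
# dissect captures payload_json => {"simId":42}, i.e. everything between "payload=" and the
# next ", " delimiter.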
#json {
# source => "payload_json"
# target => "payload_object"
# }
}
output {
elasticsearch {
hosts => ["http://10.12.174.15:9200","http://10.12.174.16:9200","http://10.12.174.17:9200"]
user => "logstash_internal"
password => "vK54tBEFUZaKBVtwsmlHksbr07Rm8cTn"
index => "audit-logs-7"
}
# file { path => "/home/ubuntu/auditlogs-test-logstash"}
}

156
conf.d/billing.conf Normal file
@@ -0,0 +1,156 @@
# billing.conf
#
input {
jdbc{
jdbc_driver_library => "/etc/logstash/libs/postgresql-42.2.20.jar"
jdbc_driver_class => "org.postgresql.Driver"
schedule => "* * * * *"
jdbc_connection_string => "jdbc:postgresql://pg.billing.simfony.rocks/dev_opencell"
jdbc_user => "meveo"
jdbc_password => "cg2yFnRnDeAqwUH8"
statement => "select * from vw_billing_wallet_operation_with_edr where created > :sql_last_value"
use_column_value => true
tracking_column => "created"
tracking_column_type => "timestamp"
last_run_metadata_path => "/home/logstash/.logstash_jdbc_last_run_wallet_view"
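# :sql_last_value is substituted with the value persisted in last_run_metadata_path, so each
# scheduled run only fetches rows whose "created" timestamp is newer than the last one indexed.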
}
}
filter
{
clone {
clones => ['wallet_operation', 'edr']
}
if [type] == 'wallet_operation' {
prune {
whitelist_names => [
'id',
'operation_type',
'version',
'created',
'updated',
'code',
'description',
'amount_tax',
'amount_with_tax',
'amount_without_tax',
'end_date',
'offer_code',
'operation_date',
'parameter_1',
'parameter_2',
'parameter_3',
'quantity',
'start_date',
'status',
'subscription_date',
'tax_percent',
'credit_debit_flag',
'unit_amount_tax',
'unit_amount_with_tax',
'unit_amount_without_tax',
'charge_instance_id',
'counter_id',
'currency_id',
'priceplan_id',
'reratedwalletoperation_id',
'seller_id',
'wallet_id',
'reservation_id',
'invoicing_date',
'input_quantity',
'input_unit_description',
'rating_unit_description',
'edr_id',
'order_number',
'parameter_extra',
'raw_amount_without_tax',
'raw_amount_with_tax',
'invoice_sub_category_id',
'subscription_id',
'tax_id',
'rated_transaction_id',
'service_instance_id',
'offer_id',
'input_unitofmeasure',
'rating_unitofmeasure',
'tax_class_id',
'uuid',
'cf_values',
'cf_values_accum',
'sort_index',
'billing_account_id',
'billing_run_id',
'billing_cycle_id',
'access_user_id'
]
}
mutate {
add_field => { "[@metadata][type]" => "wallet_operation" }
}
}
else if [type] == 'edr' {
prune {
whitelist_names => [
'edr_id',
'edr_version',
'edr_created',
'edr_event_date',
'edr_last_updated',
'edr_origin_batch',
'edr_origin_record',
'edr_parameter_1',
'edr_parameter_2',
'edr_parameter_3',
'edr_parameter_4',
'edr_quantity',
'edr_reject_reason',
'edr_status',
'edr_subscription_id',
'edr_parameter_5',
'edr_parameter_6',
'edr_parameter_7',
'edr_parameter_8',
'edr_parameter_9',
'edr_date_parameter_1',
'edr_date_parameter_2',
'edr_date_parameter_3',
'edr_date_parameter_4',
'edr_date_parameter_5',
'edr_decimal_parameter_1',
'edr_decimal_parameter_2',
'edr_decimal_parameter_3',
'edr_decimal_parameter_4',
'edr_decimal_parameter_5',
'edr_access_code',
'edr_header_edr_id',
'edr_extra_parameter',
'access_user_id'
]
}
mutate {
add_field => { "[@metadata][type]" => "edr" }
}
}
}
output {
if [@metadata][type] == 'wallet_operation' {
elasticsearch {
hosts => ["http://10.12.174.15:9200","http://10.12.174.16:9200","http://10.12.174.17:9200"]
user => "logstash_internal"
password => "vK54tBEFUZaKBVtwsmlHksbr07Rm8cTn"
index => "simfony-wallet-billing"
}
} else if [@metadata][type] == 'edr' {
elasticsearch {
hosts => ["http://10.12.174.15:9200","http://10.12.174.16:9200","http://10.12.174.17:9200"]
user => "logstash_internal"
password => "vK54tBEFUZaKBVtwsmlHksbr07Rm8cTn"
index => "simfony-edr-billing"
}
}
}

107
conf.d/cdr.conf Normal file
@@ -0,0 +1,107 @@
input { pipeline { address => "Simfony_CDR" } }
filter {
csv {
columns => [
"Timestamp",
"Units",
"Access",
"Param1_TelserviceCode",
"Param2_Tadig",
"Param3_Country",
"Param4_Zone",
"Param5_Destination",
"Param6_LAC",
"Param7_CID",
"Param8_ChargingID",
"Param9_MVNOID",
"Param10_StartDate",
"Param11_EndDate",
"Param12",
"Param13",
"Param14",
"Param15_RatingGroup",
"Param16_SessionDuration",
"Param17",
"Param18",
"Param19_CDRType",
"ExtraParam1_PdpAddress",
"ExtraParam2_SessionId",
"ExtraParam3_CreditControlRequestType",
"ExtraParam4_CreditControlRequestNumber",
"ExtraParam5_Provider"
]
separator => ";"
}
date {
match => [ "Timestamp", "ISO8601" ]
target => "Timestamp"
}
date {
match => [ "Param10_StartDate", "ISO8601" ]
target => "Param10_StartDate"
}
date {
match => [ "Param11_EndDate", "ISO8601" ]
target => "Param11_EndDate"
}
mutate {
split => ["[host][name]", "."]
replace => ["[host][name]", "%{[host][name][0]}"]
rename => { "[host][name]" => "hostname" }
}
# column28 is created by the trailing ';' delimiter; no field is intended there, so it has to be dropped
mutate {
remove_field => [ "@timestamp" ]
remove_field => [ "Param12", "Param13", "Param14", "Param17", "Param18", "column28" ]
remove_field => [ "[agent]" ]
remove_field => [ "[ecs][version]" ]
remove_field => [ "[host][architecture]" ]
remove_field => [ "[host][containerized]" ]
remove_field => [ "[host][hostname]" ]
remove_field => [ "[host][name]" ]
remove_field => [ "[host][id]" ]
remove_field => [ "[host][mac]" ]
remove_field => [ "[host][os][name]" ]
remove_field => [ "[host][os][codename]" ]
remove_field => [ "[host][os][family]" ]
remove_field => [ "[host][os][kernel]" ]
remove_field => [ "[host][os][platform]" ]
remove_field => [ "[host][os][version]" ]
remove_field => [ "[host][os][family]" ]
remove_field => [ "[host][os][kernel]" ]
remove_field => [ "[log][offset]"]
}
}
output {
if "cdr" in [tags] {
elasticsearch {
hosts => ["http://10.12.174.15:9200","http://10.12.174.16:9200","http://10.12.174.17:9200"]
user => "logstash_internal"
password => "vK54tBEFUZaKBVtwsmlHksbr07Rm8cTn"
ilm_enabled => true
ilm_rollover_alias => "simfony-cdr"
ilm_policy => "simfony-cdr"
ilm_pattern => "{now/d}-000001"
}
}
else if "cdr-prepay" in [tags] {
elasticsearch {
hosts => ["http://10.12.174.15:9200","http://10.12.174.16:9200","http://10.12.174.17:9200"]
user => "logstash_internal"
password => "vK54tBEFUZaKBVtwsmlHksbr07Rm8cTn"
ilm_enabled => true
ilm_rollover_alias => "simfony-cdr-prepay"
ilm_policy => "simfony-cdr"
ilm_pattern => "{now/d}-000001"
}
}
}

@@ -0,0 +1,28 @@
# intake.conf
input {
beats { port => 5044 }
}
output {
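# Pipeline-to-pipeline "distributor" pattern: events received from Beats on port 5044 are
# forwarded to the virtual address "Simfony_Mobility_Logs", which the mobility pipeline
# consumes via input { pipeline { address => ... } }.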
if "ocs" in [tags] or "diameter" in [tags] or "aaa" in [tags] or "hss" in [tags] or "meveo" in [tags] or "meveo" in [tags] or "dra" in [tags] or "hlr" in [tags] {
pipeline { send_to => ["Simfony_Mobility_Logs"] }
}
# if "qa_ocs" in [tags] or "qa_aaa" in [tags] {
# pipeline { send_to => ["qa_mobility_logs"] }
# }
# if "cdr" in [tags] or "cdr-prepay" in [tags] {
# pipeline { send_to => ["Simfony_CDR"] }
# }
# if "ipam-monitoring" in [tags] {
# pipeline { send_to => ["Simfony_IPAM_Monitoring"] }
# }
# if "tks_bbs" in [tags] {
# pipeline { send_to => ["TKS_BBS"] }
# }
# if "notification" in [tags] {
# pipeline { send_to => ["testradius"] }
# }
}

@@ -0,0 +1,62 @@
input { pipeline { address => "Simfony_IPAM_Monitoring" } }
filter {
if "new" in [tags] {
csv {
autodetect_column_names => true
separator => ","
id => "New_commun_core"
}
} else if "old" in [tags] {
csv {
autodetect_column_names => true
separator => ","
id => "Old_commun_core"
}
}
mutate {
split => ["[host][name]", "."]
replace => ["[host][name]", "%{[host][name][0]}"]
rename => { "[host][name]" => "hostname" }
}
mutate {
remove_field => [ "[agent]" ]
remove_field => [ "[ecs][version]" ]
remove_field => [ "[host][architecture]" ]
remove_field => [ "[host][containerized]" ]
remove_field => [ "[host][hostname]" ]
remove_field => [ "[host][name]" ]
remove_field => [ "[host][id]" ]
remove_field => [ "[host][mac]" ]
remove_field => [ "[host][os][name]" ]
remove_field => [ "[host][os][codename]" ]
remove_field => [ "[host][os][family]" ]
remove_field => [ "[host][os][kernel]" ]
remove_field => [ "[host][os][platform]" ]
remove_field => [ "[host][os][version]" ]
remove_field => [ "[host][os][family]" ]
remove_field => [ "[host][os][kernel]" ]
remove_field => [ "[log][offset]"]
}
}
output {
if "old" in [tags] {
elasticsearch {
hosts => ["http://10.12.174.15:9200","http://10.12.174.16:9200","http://10.12.174.17:9200"]
user => "logstash_internal"
password => "vK54tBEFUZaKBVtwsmlHksbr07Rm8cTn"
index => "simfony-ipam-monitoring-old"
}
} else if "new" in [tags] {
elasticsearch {
hosts => ["http://10.12.174.15:9200","http://10.12.174.16:9200","http://10.12.174.17:9200"]
user => "logstash_internal"
password => "vK54tBEFUZaKBVtwsmlHksbr07Rm8cTn"
index => "simfony-ipam-monitoring-new"
}
}
}

0
conf.d/logstash_internal Normal file

70
conf.d/mtqa.conf Normal file
@@ -0,0 +1,70 @@
# mtqa.conf
input { pipeline { address => "qa_mobility_logs" } }
filter {
if "qa_ocs" in [tags] {
clone {
clones => ["notification-ocs"]
add_tag => [ "notification-ocs" ]
}
grok {
match => { "message" => "%{TIMESTAMP_ISO8601:timestamp}%{SPACE}%{LOGLEVEL:log-level}%{SPACE}%{DATA:issuer}%{SPACE}\(%{DATA:pool}\)%{SPACE}%{GREEDYDATA:log-message}" }
}
if "notification-ocs" in [tags] {
# ruby {
# code => 'puts "Input rule matched: contains ocs-notification-v1"'
# }
if [message] =~ /\[OCS-NOTIFICATION\]/ {
# Keep only lines containing "notification-v1"
if [message] =~ /mtqa_machinestalk|qa_v2_ip3labs|qa_qa_tenant/ {
# simfony
mutate {
add_tag => ["notification_mtqa"]
}
}
} else {
drop {} # Drop all other lines
}
}
} else if "mtqa_aaa" in [tags] {
clone {
clones => ["notification-aaa"]
add_tag => [ "notification-aaa" ]
}
grok {
patterns_dir => ["/etc/logstash/patterns"]
match => {
"message" => [
"%{FREERADIUS_DATE:timestamp}%{SPACE}:%{SPACE}%{FREERADIUS_LOGTYPE:log-level}:%{SPACE}%{FREERADIUS_LOGTYPE:log-plugin}:%{SPACE}%{GREEDYDATA:log-message}",
"%{FREERADIUS_DATE:timestamp}%{SPACE}:%{SPACE}%{FREERADIUS_LOGTYPE:log-level}:%{SPACE}%{GREEDYDATA:log-message}"
]
}
}
if "notification-aaa" in [tags] {
if [message] =~ /notification-v1/ {
# Keep only lines containing "notification-v1"
if [message] =~ /mtqa_machinestalk|qa_v2_ip3labs|qa_qa_tenant/ {
# simfony
mutate {
add_tag => ["notification_mtqa"]
}
}
} else {
drop {} # Drop all other lines
}
}
}
}
output {
if "notification_mtqa" in [tags] {
kafka {
bootstrap_servers => "172.20.64.140:9092"
topic_id => "notification_mtqa"
codec => json
}
}
}

@@ -0,0 +1,30 @@
input {
# Read all documents from Elasticsearch matching the given query
# Stefan: I got tired of searching for how to add that, so I created a local apache2 instance with mod_proxy that sends all requests to Elasticsearch and adds the parameter. You'll find the configuration in /etc/apache2/sites-enabled/000-default.conf
elasticsearch {
# hosts => "http://localhost:80/elastic/"
hosts => "http://77.68.122.54:9200"
index => "*"
size => 10000
scroll => "2m"
docinfo => true
query => '{"query": { "range": { "date_timestamp": { "time_zone": "+02:00","gte": "2021-02-05T10:41:00.000Z","lte": "now" }}}}'
user => "elastic"
password => 'bsdB~(7X3bHNz!B*'
ssl => false
}
}
output {
elasticsearch {
hosts => ["https://10.12.174.30:9200"]
index => "%{[@metadata][_index]}"
document_type => "%{[@metadata][_type]}"
document_id => "%{[@metadata][_id]}"
user => "elastic"
password => "5EBucabWNjvFH6E5hb5eTQPfM4bgYqsQ"
ssl_certificate_verification => false
timeout => 3
resurrect_delay => 3
}
}

@@ -0,0 +1,70 @@
input {
tcp {
port => 5000
}
}
filter {
if [message] =~ "\tat" {
grok {
match => ["message", "^(\tat)"]
add_tag => ["stacktrace"]
}
}
# grok {
# match => [ "message",
# "(?<timestamp>%{YEAR}-%{MONTHNUM}-%{MONTHDAY} %{TIME}) %{LOGLEVEL:level} %{NUMBER:pid} --- .+? :\s+(?<logmessage>.*)"
# ]
# }
json {
source => "message"
target => "logInfo"
}
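# Assumption: the gateway ships one JSON object per line, e.g. (hypothetical)
#   {"@timestamp":"2025-02-18T13:02:34.123Z","level":"INFO","app_name":"notification-gateway","thread_name":"main","logger_name":"c.s.Gateway","message":"started"}
# The json filter parses it into [logInfo]; the mutate below copies selected keys to top-level fields.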
mutate {
add_field => {
"level" => "%{[logInfo][level]}"
"appName" => "%{[logInfo][app_name]}"
"thread" => "%{[logInfo][thread_name]}"
"loggerName" => "%{[logInfo][logger_name]}"
"logMessage" => "%{[logInfo][message]}"
"logtimestamp" => "%{[logInfo][@timestamp]}"
}
}
if ([level] == "ERROR") {
mutate {
add_field => {
"stackTrace" => "%{[logInfo][stack_trace]}"
}
}
}
# if [logInfo][stack_trace] != "" {
# mutate {
# "stackTrace" => "%{[logInfo][stack_trace]}"
# }
# }
date {
match => [ "timestamp" , "yyyy-MM-dd HH:mm:ss.SSS" ]
}
mutate {
# remove_field => ["logInfo","message"]
}
}
output {
elasticsearch {
hosts => ["http://10.12.174.15:9200","http://10.12.174.16:9200","http://10.12.174.17:9200"]
user => "logstash_internal"
password => "vK54tBEFUZaKBVtwsmlHksbr07Rm8cTn"
index => "notification-gateway-%{+YYYY.MM.dd}"
}
file {
path => "/home/ubuntu/test_logstash_syslog"
}
}

@@ -0,0 +1,104 @@
input {
beats {
port => "5044"
}
}
filter {
if "ocs" in [tags] or "hlr" in [tags] {
grok {
match => { "message" => "%{TIMESTAMP_ISO8601:timestamp}%{SPACE}%{LOGLEVEL:log-level}%{SPACE}%{DATA:issuer}%{SPACE}\(%{DATA:pool}\)%{SPACE}%{GREEDYDATA:log-message}" }
}
}
if "aaa" in [tags] {
grok {
patterns_dir => ["/etc/logstash/patterns"]
match => {
"message" => [
"%{FREERADIUS_DATE:timestamp}%{SPACE}:%{SPACE}%{FREERADIUS_LOGTYPE:log-level}:%{SPACE}%{FREERADIUS_LOGTYPE:log-plugin}:%{SPACE}%{GREEDYDATA:log-message}",
"%{FREERADIUS_DATE:timestamp}%{SPACE}:%{SPACE}%{FREERADIUS_LOGTYPE:log-level}:%{SPACE}%{GREEDYDATA:log-message}"
]
}
}
}
grok {
patterns_dir => ["/etc/logstash/patterns"]
match => {
"log-message" => "IMSI\s*=\s*(%{IMSI:imsi})"
}
tag_on_failure => []
}
date {
match => [ "timestamp", "yyyy-MM-dd HH:mm:ss,SSS", "EEE MMM dd HH:mm:ss YYY", "EEE MMM d HH:mm:ss YYYY" ]
target => "timestamp"
}
mutate {
split => ["[host][name]", "."]
}
mutate {
replace => ["[host][name]", "%{[host][name][0]}"]
}
mutate{
rename => { "[host][name]" => "hostname" }
}
mutate {
remove_field => [ "@timestamp" ]
remove_field => [ "message" ]
remove_field => [ "[agent][ephemeral_id]" ]
remove_field => [ "[agent][hostname]" ]
remove_field => [ "[agent][id]" ]
remove_field => [ "[agent][name]" ]
remove_field => [ "[agent][type]" ]
remove_field => [ "[agent][version]" ]
remove_field => [ "[ecs][version]" ]
remove_field => [ "[host][architecture]" ]
remove_field => [ "[host][containerized]" ]
remove_field => [ "[host][hostname]" ]
remove_field => [ "[host][id]" ]
remove_field => [ "[host][mac]" ]
remove_field => [ "[host][os][name]" ]
remove_field => [ "[host][os][codename]" ]
remove_field => [ "[host][os][family]" ]
remove_field => [ "[host][os][kernel]" ]
remove_field => [ "[host][os][platform]" ]
remove_field => [ "[host][os][version]" ]
remove_field => [ "[host][os][family]" ]
remove_field => [ "[host][os][kernel]" ]
remove_field => [ "[log][offset]"]
}
}
output {
if "ocs" in [tags] {
elasticsearch {
hosts => ["http://10.12.174.15:9200","http://10.12.174.16:9200","http://10.12.174.17:9200"]
user => "logstash_internal"
password => "vK54tBEFUZaKBVtwsmlHksbr07Rm8cTn"
ilm_enabled => true
ilm_rollover_alias => "simfony-mobility-ocs-log"
ilm_policy => "simfony-log-hot-warm"
ilm_pattern => "{now/d}-1"
}
} else if "hlr" in [tags] {
elasticsearch {
hosts => ["http://10.12.174.15:9200","http://10.12.174.16:9200","http://10.12.174.17:9200"]
user => "logstash_internal"
password => "vK54tBEFUZaKBVtwsmlHksbr07Rm8cTn"
ilm_enabled => true
ilm_rollover_alias => "simfony-mobility-hlr-log"
ilm_policy => "simfony-log-hot-warm"
ilm_pattern => "{now/d}-1"
}
} else if "aaa" in [tags] {
elasticsearch {
hosts => ["http://10.12.174.15:9200","http://10.12.174.16:9200","http://10.12.174.17:9200"]
user => "logstash_internal"
password => "vK54tBEFUZaKBVtwsmlHksbr07Rm8cTn"
ilm_enabled => true
ilm_rollover_alias => "simfony-mobility-aaa-log"
ilm_policy => "simfony-log-hot-warm"
ilm_pattern => "{now/d}-1"
}
}
}

31
conf.d/syslog.conf Normal file
@@ -0,0 +1,31 @@
input {
syslog {
port => 6005
grok_pattern => "<%{POSINT:syslog_pri}>%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}"
id => "syslog"
}
}
filter {
syslog_pri { }
date {
match => [ "syslog_timestamp", "MMM dd HH:mm:ss", "MMM dd HH:mm:ss" ]
target => "syslog_timestamp"
}
mutate {
remove_field => [ "severity", "severity_label", "priority", "facility", "message", "@timestamp" ]
}
}
output {
elasticsearch {
hosts => ["http://10.12.174.15:9200","http://10.12.174.16:9200","http://10.12.174.17:9200"]
user => "logstash_internal"
password => "vK54tBEFUZaKBVtwsmlHksbr07Rm8cTn"
ilm_enabled => true
ilm_rollover_alias => "simfony-syslog"
ilm_policy => "simfony-syslog"
ilm_pattern => "000001"
}
}

@@ -0,0 +1,13 @@
# intake.conf
input { pipeline { address => "testradius" } }
output{
if "notification" in [tags] {
kafka {
bootstrap_servers => "10.12.174.50:9092"
topic_id => "testnotification"
}
}
stdout { codec => rubydebug }
}

89
conf.d/tks_bbs.conf Normal file
@@ -0,0 +1,89 @@
input { pipeline { address => "TKS_BBS" } }
filter {
csv {
separator => ";"
autodetect_column_names => true
}
prune {
whitelist_names => [
"OU ID",
"Account Type",
"Discounted Onnet Call",
"Onnet Call",
"Onnet Call Discount",
"Discounted Offnet call",
"Offnet Call",
"Offnet Call Discount",
"Discounted International Roaming Call",
"International Roaming Call",
"International Roaming Call Discount",
"Discounted International Roaming SMS",
"International Roaming SMS",
"International Roaming SMS Discount",
"Capped Data Roaming",
"Bill Delivery Fee",
"SimpTopUp",
"Layanan Digital",
"Layanan Banking",
"Dispute Adjustment",
"Stamp Duty ID",
"M2M Subscription Fee",
"M2M Usage",
"M2M Application",
"M2M Hardware",
"M2M Discount",
"M2M Other",
"Enterprise Solution Package Code",
"Enterprise Solution Recurring Charge",
"Enterprise Solution Installment Charge",
"Enterprise Solution Penalty",
"GPRS Installment",
"GPRS Penalty",
"Device Penalty",
"BA Cancellation Date",
"International Service Package Code",
"International Service Package RC/OC",
"International Service Package UC",
"Add On / Toping Package Code",
"Add On / Toping Charges",
"Flash Abonemen RC/OC",
"Flash Abonemen UC",
"iPhone Abonemen RC/OC",
"iPhone Abonemen UC",
"Other Discount",
"Waive Indicator"
]
}
mutate {
remove_field => [ "[agent]" ]
remove_field => [ "[ecs][version]" ]
remove_field => [ "[host][architecture]" ]
remove_field => [ "[host][containerized]" ]
remove_field => [ "[host][hostname]" ]
remove_field => [ "[host][name]" ]
remove_field => [ "[host][id]" ]
remove_field => [ "[host][mac]" ]
remove_field => [ "[host][os][name]" ]
remove_field => [ "[host][os][codename]" ]
remove_field => [ "[host][os][family]" ]
remove_field => [ "[host][os][kernel]" ]
remove_field => [ "[host][os][platform]" ]
remove_field => [ "[host][os][version]" ]
remove_field => [ "[host][os][family]" ]
remove_field => [ "[host][os][kernel]" ]
remove_field => [ "[log][offset]"]
}
}
output {
elasticsearch {
hosts => ["http://10.12.174.15:9200","http://10.12.174.16:9200","http://10.12.174.17:9200"]
user => "logstash_internal"
password => "vK54tBEFUZaKBVtwsmlHksbr07Rm8cTn"
index => "test-bbs_v2"
}
}

13
data/intake-filebeat.conf Normal file
@@ -0,0 +1,13 @@
# intake-filebeat.conf
input {
beats { port => 5044 }
}
output {
if "mtqa_ocs" in [tags] or "mtqa_aaa" in [tags] {
pipeline { send_to => ["mtqa_mobility_logs"] }
}
}

70
data/mtqa.conf Normal file
@@ -0,0 +1,70 @@
# mtqa.conf
input { pipeline { address => "mtqa_mobility_logs" } }
filter {
if "mtqa_ocs" in [tags] {
clone {
clones => ["notification-ocs"]
add_tag => [ "notification-ocs" ]
}
grok {
match => { "message" => "%{TIMESTAMP_ISO8601:timestamp}%{SPACE}%{LOGLEVEL:log-level}%{SPACE}%{DATA:issuer}%{SPACE}\(%{DATA:pool}\)%{SPACE}%{GREEDYDATA:log-message}" }
}
if "notification-ocs" in [tags] {
# ruby {
# code => 'puts "Input rule matched: contains ocs-notification-v1"'
# }
if [message] =~ /\[OCS-NOTIFICATION\]/ {
# Keep only lines containing "notification-v1"
if [message] =~ /mtqa_machinestalk|qa_v2_ip3labs|qa_qa_tenant/ {
# simfony
mutate {
add_tag => ["notification_mtqa"]
}
}
} else {
drop {} # Drop all other lines
}
}
} else if "mtqa_aaa" in [tags] {
clone {
clones => ["notification-aaa"]
add_tag => [ "notification-aaa" ]
}
grok {
patterns_dir => ["/etc/logstash/patterns"]
match => {
"message" => [
"%{FREERADIUS_DATE:timestamp}%{SPACE}:%{SPACE}%{FREERADIUS_LOGTYPE:log-level}:%{SPACE}%{FREERADIUS_LOGTYPE:log-plugin}:%{SPACE}%{GREEDYDATA:log-message}",
"%{FREERADIUS_DATE:timestamp}%{SPACE}:%{SPACE}%{FREERADIUS_LOGTYPE:log-level}:%{SPACE}%{GREEDYDATA:log-message}"
]
}
}
if "notification-aaa" in [tags] {
if [message] =~ /notification-v1/ {
# Keep only lines containing "notification-v1"
if [message] =~ /mtqa_machinestalk|qa_v2_ip3labs|qa_qa_tenant/ {
# simfony
mutate {
add_tag => ["notification_mtqa"]
}
}
} else {
drop {} # Drop all other lines
}
}
}
}
output {
if "notification_mtqa" in [tags] {
kafka {
bootstrap_servers => "172.20.64.140:9092"
topic_id => "notification_mtqa"
codec => json
}
}
}

18
docker-compose.yml Normal file
@@ -0,0 +1,18 @@
version: "2.2"
services:
logstash:
restart: always
container_name: logstash
image: docker.elastic.co/logstash/logstash:7.10.1
ports:
- "172.20.64.140:9600:9600"
- "172.20.64.140:5044:5044"
volumes:
- ./settings:/usr/share/logstash/config:z
- ./data:/etc/logstash/conf.d:z
- ./patterns:/etc/logstash/patterns:z
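# The ":z" suffix asks Docker to relabel the mounted content for SELinux so the container
# can read the shared host directories.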
networks:
- portal_qa
networks:
portal_qa:
external: true

81
jvm.options Normal file
@@ -0,0 +1,81 @@
## JVM configuration
# Xms represents the initial size of total heap space
# Xmx represents the maximum size of total heap space
-Xms4g
-Xmx4g
################################################################
## Expert settings
################################################################
##
## All settings below this section are considered
## expert settings. Don't tamper with them unless
## you understand what you are doing
##
################################################################
## GC configuration
-XX:+UseConcMarkSweepGC
-XX:CMSInitiatingOccupancyFraction=75
-XX:+UseCMSInitiatingOccupancyOnly
## Locale
# Set the locale language
#-Duser.language=en
# Set the locale country
#-Duser.country=US
# Set the locale variant, if any
#-Duser.variant=
## basic
# set the I/O temp directory
#-Djava.io.tmpdir=$HOME
# set to headless, just in case
-Djava.awt.headless=true
# ensure UTF-8 encoding by default (e.g. filenames)
-Dfile.encoding=UTF-8
# use our provided JNA always versus the system one
#-Djna.nosys=true
# Turn on JRuby invokedynamic
-Djruby.compile.invokedynamic=true
# Force Compilation
-Djruby.jit.threshold=0
# Make sure joni regexp interruptability is enabled
-Djruby.regexp.interruptible=true
## heap dumps
# generate a heap dump when an allocation from the Java heap fails
# heap dumps are created in the working directory of the JVM
-XX:+HeapDumpOnOutOfMemoryError
# specify an alternative path for heap dumps
# ensure the directory exists and has sufficient space
#-XX:HeapDumpPath=${LOGSTASH_HOME}/heapdump.hprof
## GC logging
#-XX:+PrintGCDetails
#-XX:+PrintGCTimeStamps
#-XX:+PrintGCDateStamps
#-XX:+PrintClassHistogram
#-XX:+PrintTenuringDistribution
#-XX:+PrintGCApplicationStoppedTime
# log GC status to a file with time stamps
# ensure the directory exists
#-Xloggc:${LS_GC_LOG_FILE}
# Entropy source for randomness
-Djava.security.egd=file:/dev/urandom
# Copy the logging context from parent threads to children
-Dlog4j2.isThreadContextMapInheritable=true

BIN
libs/postgresql-42.2.20.jar Normal file

Binary file not shown.

17
logstash-sample.conf Normal file
@@ -0,0 +1,17 @@
# Sample Logstash configuration for creating a simple
# Beats -> Logstash -> Elasticsearch pipeline.
input {
beats {
port => 5044
}
}
output {
elasticsearch {
hosts => ["http://localhost:9200"]
index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
#user => "elastic"
#password => "changeme"
}
}

2
patterns/custom_pattern Normal file
@@ -0,0 +1,2 @@
IMSI [0-9]{15}
USER [0-9a-zA-Z\-]*
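# Illustrative (made-up) values: IMSI matches exactly 15 digits, e.g. 001011234567890;
# USER matches letters, digits and dashes, e.g. api-user-01.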

@@ -0,0 +1,6 @@
# Patterns for freeradius
# example: https://github.com/mcnewton/elk/blob/master/grok-patterns/freeradius
FREERADIUS_DATE %{DAY} %{MONTH} ?%{MONTHDAY} %{TIME} %{YEAR}
FREERADIUS_LOGTYPE Auth|Info|Error|Proxy|rlm_perl
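# Hypothetical line these patterns are meant to split (format assumed from the AAA grok filters):
#   Tue Feb 18 16:02:34 2025 : Auth: Login OK: [someuser] (from client nas01 port 0)
# -> timestamp = "Tue Feb 18 16:02:34 2025", log-level = "Auth", log-message = the rest.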

165
settings/log4j2.properties Normal file
@@ -0,0 +1,165 @@
status = error
name = LogstashPropertiesConfig
appender.console.type = Console
appender.console.name = plain_console
appender.console.layout.type = PatternLayout
appender.console.layout.pattern = [%d{ISO8601}][%-5p][%-25c]%notEmpty{[%X{pipeline.id}]}%notEmpty{[%X{plugin.id}]} %m%n
appender.json_console.type = Console
appender.json_console.name = json_console
appender.json_console.layout.type = JSONLayout
appender.json_console.layout.compact = true
appender.json_console.layout.eventEol = true
appender.rolling.type = RollingFile
appender.rolling.name = plain_rolling
appender.rolling.fileName = ${sys:ls.logs}/logstash-${sys:ls.log.format}.log
appender.rolling.filePattern = ${sys:ls.logs}/logstash-${sys:ls.log.format}-%d{yyyy-MM-dd}-%i.log.gz
appender.rolling.policies.type = Policies
appender.rolling.policies.time.type = TimeBasedTriggeringPolicy
appender.rolling.policies.time.interval = 1
appender.rolling.policies.time.modulate = true
appender.rolling.layout.type = PatternLayout
appender.rolling.layout.pattern = [%d{ISO8601}][%-5p][%-25c]%notEmpty{[%X{pipeline.id}]}%notEmpty{[%X{plugin.id}]} %m%n
appender.rolling.policies.size.type = SizeBasedTriggeringPolicy
appender.rolling.policies.size.size = 100MB
appender.rolling.strategy.type = DefaultRolloverStrategy
appender.rolling.strategy.max = 30
appender.rolling.avoid_pipelined_filter.type = ScriptFilter
appender.rolling.avoid_pipelined_filter.script.type = Script
appender.rolling.avoid_pipelined_filter.script.name = filter_no_pipelined
appender.rolling.avoid_pipelined_filter.script.language = JavaScript
appender.rolling.avoid_pipelined_filter.script.scriptText = ${sys:ls.pipeline.separate_logs} == false || !(logEvent.getContextData().containsKey("pipeline.id"))
appender.json_rolling.type = RollingFile
appender.json_rolling.name = json_rolling
appender.json_rolling.fileName = ${sys:ls.logs}/logstash-${sys:ls.log.format}.log
appender.json_rolling.filePattern = ${sys:ls.logs}/logstash-${sys:ls.log.format}-%d{yyyy-MM-dd}-%i.log.gz
appender.json_rolling.policies.type = Policies
appender.json_rolling.policies.time.type = TimeBasedTriggeringPolicy
appender.json_rolling.policies.time.interval = 1
appender.json_rolling.policies.time.modulate = true
appender.json_rolling.layout.type = JSONLayout
appender.json_rolling.layout.compact = true
appender.json_rolling.layout.eventEol = true
appender.json_rolling.policies.size.type = SizeBasedTriggeringPolicy
appender.json_rolling.policies.size.size = 100MB
appender.json_rolling.strategy.type = DefaultRolloverStrategy
appender.json_rolling.strategy.max = 30
appender.json_rolling.avoid_pipelined_filter.type = ScriptFilter
appender.json_rolling.avoid_pipelined_filter.script.type = Script
appender.json_rolling.avoid_pipelined_filter.script.name = filter_no_pipelined
appender.json_rolling.avoid_pipelined_filter.script.language = JavaScript
appender.json_rolling.avoid_pipelined_filter.script.scriptText = ${sys:ls.pipeline.separate_logs} == false || !(logEvent.getContextData().containsKey("pipeline.id"))
appender.routing.type = Routing
appender.routing.name = pipeline_routing_appender
appender.routing.routes.type = Routes
appender.routing.routes.script.type = Script
appender.routing.routes.script.name = routing_script
appender.routing.routes.script.language = JavaScript
appender.routing.routes.script.scriptText = logEvent.getContextData().containsKey("pipeline.id") ? logEvent.getContextData().getValue("pipeline.id") : "sink";
appender.routing.routes.route_pipelines.type = Route
appender.routing.routes.route_pipelines.rolling.type = RollingFile
appender.routing.routes.route_pipelines.rolling.name = appender-${ctx:pipeline.id}
appender.routing.routes.route_pipelines.rolling.fileName = ${sys:ls.logs}/pipeline_${ctx:pipeline.id}.log
appender.routing.routes.route_pipelines.rolling.filePattern = ${sys:ls.logs}/pipeline_${ctx:pipeline.id}.%i.log.gz
appender.routing.routes.route_pipelines.rolling.layout.type = PatternLayout
appender.routing.routes.route_pipelines.rolling.layout.pattern = [%d{ISO8601}][%-5p][%-25c] %m%n
appender.routing.routes.route_pipelines.rolling.policy.type = SizeBasedTriggeringPolicy
appender.routing.routes.route_pipelines.rolling.policy.size = 100MB
appender.routing.routes.route_pipelines.strategy.type = DefaultRolloverStrategy
appender.routing.routes.route_pipelines.strategy.max = 30
appender.routing.routes.route_sink.type = Route
appender.routing.routes.route_sink.key = sink
appender.routing.routes.route_sink.null.type = Null
appender.routing.routes.route_sink.null.name = drop-appender
rootLogger.level = ${sys:ls.log.level}
rootLogger.appenderRef.console.ref = ${sys:ls.log.format}_console
rootLogger.appenderRef.rolling.ref = ${sys:ls.log.format}_rolling
rootLogger.appenderRef.routing.ref = pipeline_routing_appender
# Slowlog
appender.console_slowlog.type = Console
appender.console_slowlog.name = plain_console_slowlog
appender.console_slowlog.layout.type = PatternLayout
appender.console_slowlog.layout.pattern = [%d{ISO8601}][%-5p][%-25c] %m%n
appender.json_console_slowlog.type = Console
appender.json_console_slowlog.name = json_console_slowlog
appender.json_console_slowlog.layout.type = JSONLayout
appender.json_console_slowlog.layout.compact = true
appender.json_console_slowlog.layout.eventEol = true
appender.rolling_slowlog.type = RollingFile
appender.rolling_slowlog.name = plain_rolling_slowlog
appender.rolling_slowlog.fileName = ${sys:ls.logs}/logstash-slowlog-${sys:ls.log.format}.log
appender.rolling_slowlog.filePattern = ${sys:ls.logs}/logstash-slowlog-${sys:ls.log.format}-%d{yyyy-MM-dd}-%i.log.gz
appender.rolling_slowlog.policies.type = Policies
appender.rolling_slowlog.policies.time.type = TimeBasedTriggeringPolicy
appender.rolling_slowlog.policies.time.interval = 1
appender.rolling_slowlog.policies.time.modulate = true
appender.rolling_slowlog.layout.type = PatternLayout
appender.rolling_slowlog.layout.pattern = [%d{ISO8601}][%-5p][%-25c] %m%n
appender.rolling_slowlog.policies.size.type = SizeBasedTriggeringPolicy
appender.rolling_slowlog.policies.size.size = 100MB
appender.rolling_slowlog.strategy.type = DefaultRolloverStrategy
appender.rolling_slowlog.strategy.max = 30
appender.json_rolling_slowlog.type = RollingFile
appender.json_rolling_slowlog.name = json_rolling_slowlog
appender.json_rolling_slowlog.fileName = ${sys:ls.logs}/logstash-slowlog-${sys:ls.log.format}.log
appender.json_rolling_slowlog.filePattern = ${sys:ls.logs}/logstash-slowlog-${sys:ls.log.format}-%d{yyyy-MM-dd}-%i.log.gz
appender.json_rolling_slowlog.policies.type = Policies
appender.json_rolling_slowlog.policies.time.type = TimeBasedTriggeringPolicy
appender.json_rolling_slowlog.policies.time.interval = 1
appender.json_rolling_slowlog.policies.time.modulate = true
appender.json_rolling_slowlog.layout.type = JSONLayout
appender.json_rolling_slowlog.layout.compact = true
appender.json_rolling_slowlog.layout.eventEol = true
appender.json_rolling_slowlog.policies.size.type = SizeBasedTriggeringPolicy
appender.json_rolling_slowlog.policies.size.size = 100MB
appender.json_rolling_slowlog.strategy.type = DefaultRolloverStrategy
appender.json_rolling_slowlog.strategy.max = 30
logger.slowlog.name = slowlog
logger.slowlog.level = trace
logger.slowlog.appenderRef.console_slowlog.ref = ${sys:ls.log.format}_console_slowlog
logger.slowlog.appenderRef.rolling_slowlog.ref = ${sys:ls.log.format}_rolling_slowlog
logger.slowlog.additivity = false
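# Note: the slowlog logger above only receives entries once slowlog thresholds
# are set in logstash.yml (they are disabled by default). A minimal sketch,
# assuming the documented threshold settings, with illustrative values:
#
#   slowlog.threshold.warn: 2s
#   slowlog.threshold.info: 1s
#   slowlog.threshold.debug: 500ms
#   slowlog.threshold.trace: 100ms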
logger.licensereader.name = logstash.licensechecker.licensereader
logger.licensereader.level = error
# Silence http-client by default
logger.apache_http_client.name = org.apache.http
logger.apache_http_client.level = fatal
# Deprecation log
appender.deprecation_rolling.type = RollingFile
appender.deprecation_rolling.name = deprecation_plain_rolling
appender.deprecation_rolling.fileName = ${sys:ls.logs}/logstash-deprecation.log
appender.deprecation_rolling.filePattern = ${sys:ls.logs}/logstash-deprecation-%d{yyyy-MM-dd}-%i.log.gz
appender.deprecation_rolling.policies.type = Policies
appender.deprecation_rolling.policies.time.type = TimeBasedTriggeringPolicy
appender.deprecation_rolling.policies.time.interval = 1
appender.deprecation_rolling.policies.time.modulate = true
appender.deprecation_rolling.layout.type = PatternLayout
appender.deprecation_rolling.layout.pattern = [%d{ISO8601}][%-5p][%-25c]%notEmpty{[%X{pipeline.id}]}%notEmpty{[%X{plugin.id}]} %m%n
appender.deprecation_rolling.policies.size.type = SizeBasedTriggeringPolicy
appender.deprecation_rolling.policies.size.size = 100MB
appender.deprecation_rolling.strategy.type = DefaultRolloverStrategy
appender.deprecation_rolling.strategy.max = 30
logger.deprecation.name = org.logstash.deprecation, deprecation
logger.deprecation.level = WARN
logger.deprecation.appenderRef.deprecation_rolling.ref = deprecation_plain_rolling
logger.deprecation.additivity = false
logger.deprecation_root.name = deprecation
logger.deprecation_root.level = WARN
logger.deprecation_root.appenderRef.deprecation_rolling.ref = deprecation_plain_rolling
logger.deprecation_root.additivity = false

292
settings/logstash.yml Normal file
@ -0,0 +1,292 @@
# Settings file in YAML
#
# Settings can be specified either in hierarchical form, e.g.:
#
#   pipeline:
#     batch:
#       size: 125
#       delay: 5
#
# Or as flat keys:
#
pipeline.batch.size: 125
pipeline.batch.delay: 5
#
# ------------ Node identity ------------
#
# Use a descriptive name for the node:
#
node.name: logstash-01
#
# If omitted the node name will default to the machine's host name
#
# ------------ Data path ------------------
#
# Which directory should be used by logstash and its plugins
# for any persistent needs. Defaults to LOGSTASH_HOME/data
#
path.data: LOGSTASH_HOME/data
#
# ------------ Pipeline Settings --------------
#
# The ID of the pipeline.
#
#pipeline.id: mobility-log
#
# Set the number of workers that will, in parallel, execute the filters+outputs
# stage of the pipeline.
#
# This defaults to the number of the host's CPU cores.
#
#pipeline.workers: 3
#
# How many events to retrieve from inputs before sending to filters+workers
#
#pipeline.batch.size: 7000
#
# How long to wait in milliseconds while polling for the next event
# before dispatching an undersized batch to filters+outputs
#
#pipeline.batch.delay: 50
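#
# As a rough sizing sketch (illustrative numbers): events held in flight at any
# moment are approximately pipeline.workers * pipeline.batch.size, so 3 workers
# with a batch size of 7000 keep on the order of 21000 events in memory, which
# should be accounted for when sizing the heap in jvm.options.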
#
# Force Logstash to exit during shutdown even if there are still inflight
# events in memory. By default, logstash will refuse to quit until all
# received events have been pushed to the outputs.
#
# WARNING: enabling this can lead to data loss during shutdown
#
# pipeline.unsafe_shutdown: false
#
# Set the pipeline event ordering. Options are "auto" (the default), "true" or "false".
# "auto" will automatically enable ordering if the 'pipeline.workers' setting
# is also set to '1'.
# "true" will enforce ordering on the pipeline and prevent logstash from starting
# if there are multiple workers.
# "false" will disable any extra processing necessary for preserving ordering.
#
#pipeline.ordered: false
#
# ------------ Pipeline Configuration Settings --------------
#
# Where to fetch the pipeline configuration for the main pipeline
#
# path.config:
#
# Pipeline configuration string for the main pipeline
#
# config.string:
#
# At startup, test if the configuration is valid and exit (dry run)
#
# config.test_and_exit: false
#
# Periodically check if the configuration has changed and reload the pipeline
# This can also be triggered manually through the SIGHUP signal
#
config.reload.automatic: true
#
# How often to check if the pipeline configuration has changed (in seconds)
# Note that the unit value (s) is required. Values without a qualifier (e.g. 60)
# are treated as nanoseconds.
# Setting the interval this way is not recommended and might change in later versions.
#
config.reload.interval: 120s
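#
# A reload can also be forced manually with SIGHUP; a minimal sketch, assuming
# the sysv pidfile location configured in startup.options:
#
#   kill -SIGHUP "$(cat /var/run/logstash.pid)"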
#
# Show fully compiled configuration as debug log message
# NOTE: --log.level must be 'debug'
#
#config.debug: false
#
# When enabled, process escaped characters such as \n and \" in strings in the
# pipeline configuration files.
#
# config.support_escapes: false
#
# ------------ HTTP API Settings -------------
# Define settings related to the HTTP API here.
#
# The HTTP API is enabled by default. It can be disabled, but features that rely
# on it will not work as intended.
# http.enabled: true
#
# By default, the HTTP API is bound to only the host's local loopback interface,
# ensuring that it is not accessible to the rest of the network. Because the API
# includes neither authentication nor authorization and has not been hardened or
# tested for use as a publicly-reachable API, binding to publicly accessible IPs
# should be avoided where possible.
#
# http.host: 127.0.0.1
#
# The HTTP API web server will listen on an available port from the given range.
# Values can be specified as a single port (e.g., `9600`), or an inclusive range
# of ports (e.g., `9600-9700`).
#
# http.port: 9600-9700
#
# ------------ Module Settings ---------------
# Define modules here. Module definitions must be defined as an array.
# The simple way to see this is to prepend each `name` with a `-`, and keep
# all associated variables under the `name` they are associated with, and
# above the next, like this:
#
# modules:
#   - name: MODULE_NAME
#     var.PLUGINTYPE1.PLUGINNAME1.KEY1: VALUE
#     var.PLUGINTYPE1.PLUGINNAME1.KEY2: VALUE
#     var.PLUGINTYPE2.PLUGINNAME1.KEY1: VALUE
#     var.PLUGINTYPE3.PLUGINNAME3.KEY1: VALUE
#
# Module variable names must be in the format of
#
# var.PLUGIN_TYPE.PLUGIN_NAME.KEY
#
# modules:
#
# ------------ Queuing Settings --------------
#
# Internal queuing model, "memory" for legacy in-memory based queuing and
# "persisted" for disk-based acked queueing. Defaults is memory
#
queue.type: persisted
#queue.type: memory
# If using queue.type: persisted, the directory path where the data files will be stored.
# Default is path.data/queue
#
#path.queue: path.data/queue
#
# If using queue.type: persisted, the page data files size. The queue data consists of
# append-only data files separated into pages. Default is 64mb
#
queue.page_capacity: 64mb
#
# If using queue.type: persisted, the maximum number of unread events in the queue.
# Default is 0 (unlimited)
#
queue.max_events: 0
#
# If using queue.type: persisted, the total capacity of the queue in number of bytes.
# If you would like more unacked events to be buffered in Logstash, you can increase the
# capacity using this setting. Please make sure your disk drive has capacity greater than
# the size specified here. If both max_bytes and max_events are specified, Logstash will pick
# whichever criterion is reached first
# Default is 1024mb or 1gb
#
queue.max_bytes: 4gb
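#
# With the values above, each pipeline's on-disk queue grows in 64mb pages up
# to 4gb, i.e. roughly 4096mb / 64mb = 64 page files at most (plus checkpoint
# files), so the disk backing path.data should comfortably exceed that per
# persisted pipeline.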
#
# If using queue.type: persisted, the maximum number of acked events before forcing a checkpoint
# Default is 1024, 0 for unlimited
#
queue.checkpoint.acks: 1024
#
# If using queue.type: persisted, the maximum number of written events before forcing a checkpoint
# Default is 1024, 0 for unlimited
#
queue.checkpoint.writes: 1024
#
# If using queue.type: persisted, the interval in milliseconds when a checkpoint is forced on the head page
# Default is 1000, 0 for no periodic checkpoint.
#
queue.checkpoint.interval: 1000
#
# ------------ Dead-Letter Queue Settings --------------
# Flag to turn on dead-letter queue.
#
# dead_letter_queue.enable: false
# If using dead_letter_queue.enable: true, the maximum size of each dead letter queue. Entries
# will be dropped if they would increase the size of the dead letter queue beyond this setting.
# Default is 1024mb
# dead_letter_queue.max_bytes: 1024mb
# If using dead_letter_queue.enable: true, the interval in milliseconds where if no further events eligible for the DLQ
# have been created, a dead letter queue file will be written. A low value here will mean that more, smaller, queue files
# may be written, while a larger value will introduce more latency between items being "written" to the dead letter queue, and
# being available to be read by the dead_letter_queue input when items are written infrequently.
# Default is 5000.
#
# dead_letter_queue.flush_interval: 5000
# If using dead_letter_queue.enable: true, the directory path where the data files will be stored.
# Default is path.data/dead_letter_queue
#
# path.dead_letter_queue:
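#
# Events written to the dead letter queue can be re-processed with the
# dead_letter_queue input plugin; a minimal sketch, assuming DLQ is enabled,
# the default path under path.data, and one of the pipeline ids from
# pipelines.yml (all values illustrative):
#
#   input {
#     dead_letter_queue {
#       path           => "/usr/share/logstash/data/dead_letter_queue"
#       pipeline_id    => "mtqa_mobility_logs"
#       commit_offsets => true
#     }
#   }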
#
# ------------ Metrics Settings --------------
#
# Bind address for the metrics REST endpoint
#
http.host: "127.0.0.1"
#
# Bind port for the metrics REST endpoint. This option also accepts a range
# (9600-9700), and Logstash will pick up the first available port.
#
http.port: 9600
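#
# A quick liveness check against the API bound above (illustrative):
#
#   curl -XGET 'http://127.0.0.1:9600/_node/stats?pretty'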
#
# ------------ Debugging Settings --------------
#
# Options for log.level:
# * fatal
# * error
# * warn
# * info (default)
# * debug
# * trace
#
log.level: info
path.logs: /var/log/logstash
#
# ------------ Other Settings --------------
#
# Where to find custom plugins
# path.plugins: []
#
# Flag to output log lines of each pipeline in its separate log file. Each log filename contains the pipeline.name
# Default is false
# pipeline.separate_logs: false
#
# ------------ X-Pack Settings (not applicable for OSS build)--------------
#
# X-Pack Monitoring
# https://www.elastic.co/guide/en/logstash/current/monitoring-logstash.html
xpack.monitoring.enabled: true
#xpack.monitoring.elasticsearch.username: logstash_system
xpack.monitoring.elasticsearch.username: elastic
xpack.monitoring.elasticsearch.password: xW8DTQG69Zrxy7hx
#xpack.monitoring.elasticsearch.proxy: ["http://proxy:port"]
xpack.monitoring.elasticsearch.hosts: ["http://172.20.64.140:9200"]
# an alternative to hosts + username/password settings is to use cloud_id/cloud_auth
#xpack.monitoring.elasticsearch.cloud_id: monitoring_cluster_id:xxxxxxxxxx
#xpack.monitoring.elasticsearch.cloud_auth: logstash_system:password
# another authentication alternative is to use an Elasticsearch API key
#xpack.monitoring.elasticsearch.api_key: "id:api_key"
#xpack.monitoring.elasticsearch.ssl.certificate_authority: [ "/path/to/ca.crt" ]
#xpack.monitoring.elasticsearch.ssl.truststore.path: path/to/file
#xpack.monitoring.elasticsearch.ssl.truststore.password: password
#xpack.monitoring.elasticsearch.ssl.keystore.path: /path/to/file
#xpack.monitoring.elasticsearch.ssl.keystore.password: password
#xpack.monitoring.elasticsearch.ssl.verification_mode: certificate
#xpack.monitoring.elasticsearch.sniffing: false
#xpack.monitoring.collection.interval: 10s
#xpack.monitoring.collection.pipeline.details.enabled: true
#
# X-Pack Management
# https://www.elastic.co/guide/en/logstash/current/logstash-centralized-pipeline-management.html
#xpack.management.enabled: false
#xpack.management.pipeline.id: ["main", "apache_logs"]
#xpack.management.elasticsearch.username: logstash_admin_user
#xpack.management.elasticsearch.password: password
#xpack.management.elasticsearch.proxy: ["http://proxy:port"]
#xpack.management.elasticsearch.hosts: ["https://es1:9200", "https://es2:9200"]
# an alternative to hosts + username/password settings is to use cloud_id/cloud_auth
#xpack.management.elasticsearch.cloud_id: management_cluster_id:xxxxxxxxxx
#xpack.management.elasticsearch.cloud_auth: logstash_admin_user:password
# another authentication alternative is to use an Elasticsearch API key
#xpack.management.elasticsearch.api_key: "id:api_key"
#xpack.management.elasticsearch.ssl.certificate_authority: [ "/path/to/ca.crt" ]
#xpack.management.elasticsearch.ssl.truststore.path: /path/to/file
#xpack.management.elasticsearch.ssl.truststore.password: password
#xpack.management.elasticsearch.ssl.keystore.path: /path/to/file
#xpack.management.elasticsearch.ssl.keystore.password: password
#xpack.management.elasticsearch.ssl.verification_mode: certificate
#xpack.management.elasticsearch.sniffing: false
#xpack.management.logstash.poll_interval: 5s

23
settings/pipelines.yml Normal file
@ -0,0 +1,23 @@
# This file is where you define your pipelines. You can define multiple.
# For more information on multiple pipelines, see the documentation:
# https://www.elastic.co/guide/en/logstash/current/multiple-pipelines.html
#- pipeline.id: Simfony_Mobility_Logs
#  path.config: "/etc/logstash/conf.d/1*.conf"
#  pipeline.workers: 5
#  pipeline.batch.size: 1000
#  pipeline.batch.delay: 50
#  pipeline.ordered: false
- pipeline.id: Simfony_Filebeat_Server_test2
  path.config: "/etc/logstash/conf.d/intake-filebeat.conf"
  pipeline.workers: 2
  pipeline.batch.size: 500
- pipeline.id: mtqa_mobility_logs
  path.config: "/etc/logstash/conf.d/mtqa.conf"
  pipeline.workers: 5
  pipeline.batch.size: 1000
  pipeline.batch.delay: 50
  pipeline.ordered: false

53
startup.options Normal file
@ -0,0 +1,53 @@
################################################################################
# These settings are ONLY used by $LS_HOME/bin/system-install to create a custom
# startup script for Logstash and are not used by Logstash itself. It should
# automagically use the init system (systemd, upstart, sysv, etc.) that your
# Linux distribution uses.
#
# After changing anything here, you need to re-run $LS_HOME/bin/system-install
# as root to push the changes to the init script.
################################################################################
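# For example, after editing this file (a sketch; adjust paths to your layout):
#
#   sudo /usr/share/logstash/bin/system-install /etc/logstash/startup.options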
# Override Java location
#JAVACMD=/usr/bin/java
# Set a home directory
LS_HOME=/usr/share/logstash
# logstash settings directory, the path which contains logstash.yml
LS_SETTINGS_DIR=/etc/logstash
# Arguments to pass to logstash
LS_OPTS="--path.settings ${LS_SETTINGS_DIR}"
# Arguments to pass to java
LS_JAVA_OPTS=""
# pidfiles aren't used the same way for upstart and systemd; this is for sysv users.
LS_PIDFILE=/var/run/logstash.pid
# user and group id to be invoked as
LS_USER=logstash
LS_GROUP=logstash
# Enable GC logging by uncommenting the appropriate lines in the GC logging
# section in jvm.options
LS_GC_LOG_FILE=/var/log/logstash/gc.log
# Open file limit
LS_OPEN_FILES=16384
# Nice level
LS_NICE=19
# Change these to have the init script named and described differently
# This is useful when running multiple instances of Logstash on the same
# physical box or vm
SERVICE_NAME="logstash"
SERVICE_DESCRIPTION="logstash"
# If you need to run a command or script before launching Logstash, put it
# between the lines beginning with `read` and `EOM`, and uncomment those lines.
###
## read -r -d '' PRESTART << EOM
## EOM