ChatGPT-Next-Web/script/main.py
Web4 cffb92c88f
Qubuhub patch 1 (#5)
* Add files via upload
* Create AI
* Create Index.js
* Create devcontainer.json
* Update and rename AI to API.js
* Rename create_RODA AI.py to Main.py
* Rename main.py to Kubu-hai.py
* Delete deployment RODA AI.yaml
* Update and rename .env.template to .env.local
* Delete src-tauri/build.rs
* Delete src-tauri directory
* Delete .husky/pre-commit
* Delete .devcontainer/devcontainer.json
* Rename Index.js to Public/Page/Index.js
* Rename Public/Page/Index.js to public/Index.js
* Update and rename __init__.py to main.py
* Update and rename Main.py to script/__init__.js
* Rename Kubu-hai.py to main.py
* Delete package.json

Signed-off-by: Web4 <137041369+QUBUHUB@users.noreply.github.com>
2025-03-04 00:01:59 -08:00

import pandas as pd
from sklearn.model_selection import train_test_split
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.utils import to_categorical

# Load the dataset; the last column is assumed to hold integer class labels.
data = pd.read_csv('path/to/your/dataset.csv')

# Separate features from labels and convert to NumPy arrays for Keras.
features = data.iloc[:, :-1].to_numpy(dtype='float32')
labels = data.iloc[:, -1].to_numpy()

# One-hot encode the integer labels for categorical cross-entropy.
labels = to_categorical(labels)

# Split the data into training and testing sets (80/20).
train_data, test_data, train_labels, test_labels = train_test_split(
    features, labels, test_size=0.2)

# Build a simple fully connected classifier.
model = Sequential([
    Dense(64, input_shape=(features.shape[1],), activation='relu'),
    Dense(64, activation='relu'),
    Dense(labels.shape[1], activation='softmax'),
])

# Compile the model.
model.compile(optimizer='adam',
              loss='categorical_crossentropy',
              metrics=['accuracy'])

# Train the model.
model.fit(train_data, train_labels, epochs=50, batch_size=32)

# Evaluate the model on the held-out test set.
loss, accuracy = model.evaluate(test_data, test_labels)
print(f'Test accuracy: {accuracy:.4f}')
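For reference, the script expects a CSV file whose columns are all numeric features except the last, which holds integer class labels (to_categorical one-hot encodes them). Below is a minimal sketch for generating such a file to smoke-test the pipeline; the filename sample_dataset.csv, the column names, and the synthetic label rule are illustrative assumptions, not part of the original script.

import numpy as np
import pandas as pd

# Generate a small synthetic dataset: four numeric feature columns plus one
# integer label column in the last position, matching what main.py expects.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(int)  # two classes: 0 and 1

df = pd.DataFrame(X, columns=['f1', 'f2', 'f3', 'f4'])
df['label'] = y
df.to_csv('sample_dataset.csv', index=False)  # point pd.read_csv(...) above at this file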